
  • 8/3/2019 Koenig1993b


Composition Processes (1978) by Gottfried Michael Koenig

By musical composition we generally understand the production of an instrumental score or a tape of electronic music. However, we also understand composition as the result of composing: the scores of instrumental or electronic pieces, an electronic tape, even a performance (we say for instance: "I have heard a composition by composer X").... The concept of composition is accordingly closed with regard to the result, but open with regard to the making of a composition; it tells us nothing about preparatory work, whether it is essential for the composition or not. Preparatory work includes the choice of instruments and values for dynamics or durations, but it also includes the definition of sounds in electronic music and can even be extended to cover the invention of special graphic symbols. Electronic sounds or graphic symbols are not always additions to composition; they are often "composed" themselves, i.e., put together according to aspects which are valid for actual composing.

These considerations give rise to the following questions:

what do we mean by "composition"?

do we mean the composition of musical language structures?

do we mean the composition of sound structures?

do we mean the composition of single sounds?

To begin with the last one: can we call a single sound, especially in electronic music, a "composition" or at least the result of composing? In the early days of electronic music the Cologne studio stressed the fact that not just a work but each of its individual sounds had to be "composition"; by this they meant a way of working in which the form of a piece and the form of its sounds should be connected: the proportions of the piece should be reflected as it were in the proportions of the individual sounds. It is better to call a list of sound data having no direct connection with the structure of a piece a description of the sounds. In terms of Cologne aesthetics it is then perfectly possible to talk about the composition of single sounds, but this brings us to the next question as to what a single sound is. The term comes from instrumental music, where it is most closely involved with questions of performance and notation technique. To give a tentative and rough description of the single sound, it is characterized by an unmistakable start ("entry") and an unmistakable end, and consequently by an unmistakable duration, furthermore by uniform pitch, loudness and timbre. We can specify this rough description in more detail by the following remarks:

timbre changes in the single sound play such a slight part as to be negligible here

changes of loudness in the single sound (crescendo, decrescendo, tremolo) generally belong to performance or expressive characteristics, the above definition (start, end, duration and pitch) being unaffected; sounds starting "inaudibly" pp or "dying away" to pp are exceptions which are justified by the general redundance of the context

pitch-changes in the single sound (glissando) restrict the above definition more closely; we might however take into account the fact that glissandi frequently occur as mere transitions between stationary sounds (especially for singers and string-players), and that independent glissandi, contradicting harmonic unambiguity, form, like pitchless percussion sounds or clusters, a category of their own in which the conditions of beginning, end and duration are still valid.

As this definition shows, we can really only speak of single sounds in instrumental music (and even then only within limits) and in the first phase of electronic music, which was closely linked with instrumental traditions. In the ensuing period of electronic music it is better to speak, instead of the composition of single sounds, of the composition of sound-events or even sound-fields, since a sound-event is not only assembled from pitches, degrees of loudness and durations, but includes to an increasing extent transformations as well: the uniform composition of an event frequently results in an auditory impression whose variability contradicts the definition of individual sound in various parameters; the beginning, end and duration are all that are left of the definition. These quantities also describe the entire work, though, which might consequently be seen as a single, complexly modulated sound.

As we see, considerable difficulties can be involved in making a distinction between a composition and its individual sounds, so that we can only answer the question as to whether we should understand composition as the composition of single sounds in the affirmative when there is a continuous structural connection between the overall form and its parts, right down to the physical sound data, and only when--in the sense of instrumental tradition--the whole can be heard to consist of individual parts.

Detailed discussion of the single sound has shown that it is only covered by the term composition to a limited extent. The composition of sound structures seems to fit into our subject more appropriately. This is because in sound structures physical sound data and musical structural quantities meet. The sound structure is not tied to the narrow definition of the individual sound, but may, as we have seen, consist in the auditory impression of single sounds. According to its definition the sound structure is more complex and usually longer than a single sound, thus more closely approaching a form-section, virtually the whole work. Nonetheless the sound structure can also be said to cover a partial aspect of composition: it would either have to be described as a more complex, assembled single sound or as an unfinished piece. However, the technical circumstances of working in an electronic studio or with a computer often lead to composing in sections; problems of sound structure can therefore be treated just as well under the musical structures of entire pieces.

This brings us to the last of the questions posed before: by composition processes do we mean the composition of musical language structures? Emphatically, yes. Composing terminates in pieces, and the extent to which pieces are put together from sounds, and the relations prevailing among these sounds, are a matter of how a composer works. Composition is the application of a grammar which generates the structures of a piece, whether the composer is aware of an explicit grammar or not. The sound-elements (I leave the question open as to whether these are single sounds or sound-events) to be composed into structures do not have to be in an unambiguous relationship either to one another or to the structures; assembly--"composing"--always takes place when something big consists of smaller parts. In more simplified terms, then, we can say that composition refers to elements which need not themselves be the subject of composition; the consideration of composition processes can disregard questions of sound production; sound production is not interesting as a composition process until it becomes integral, i.e., until the structure-generating grammar refers to sound data instead of to given sound elements.


We are faced with a distinction between structure and sound as soon as a composer not only writes a score but makes a sonic realization of it as well. This occurred for the first time in electronic music when not only single sounds but entire sound structures could be produced, particularly with the aid of voltage control. The compositional rules for giving form to the individual events as well as for connecting them in time were notated in wiring diagrams which could be reproduced to a certain extent in the form of studio patches. Not until digital computers were used did it become possible, however, to execute compositional rules of any desired degree of complexity, limited only by the capacity of the computer. Automatic realization of entire electronic pieces, or at least of lengthy sections, using voltage control systems seems to be the exception, though, and in the field of computer music much more attention appears to be paid to problems of sound production than to those of composition.

In his article A Composer's Introduction to Computer Music [2], WILLIAM BUXTON makes a distinction between composing programs and computer-aided composition. As examples of composing programs the article refers to HILLER's ILLIAC Suite, XENAKIS' ST programs and my own programs, Project One and Project Two. Though this list is not complete, it is conspicuous for its brevity. A reason for this might be the practical impossibility of describing the composition process entirely in the form of computer programs. Although a composer runs through more or less fixed sequences of decisions determining his personal style, and also employs consciously chosen rules limiting the freedom of his decisions in the individual case, he is still, whether he is aware of this or not, under the impression of a musical tradition which values a composer's originality more highly than his skill in using established patterns. A composer is more accustomed to being influenced by a spontaneous idea than by prepared plans; he decides and discards, writes down and corrects, remembers and forgets, works towards a goal, replaces it during his work by another--guided by criteria which are more likely to be found in psychology than in music theory. This is why computers are more likely to be used for purposes of composition:

1. to solve parts of problems or to compose shorter formal sections instead of complete pieces,

2. to try out models greatly simplifying compositional reality and supplying the composer with a basic scheme which he can elaborate as he feels best,

3. to compose an individual piece for which the composer writes a special program more resembling a score than a solution for a number of problems.

In the chapter on computer-aided composition, Buxton refers to the SCORE program, MUSICOMP, the GROOVE system and the POD programs, among others. This list demonstrates how difficult it is to separate the actual composition of a piece of music from auxiliary actions which are partly predominant or subordinate in the composition, or which partly overlap. Here we are faced by the issue of whether we are going to understand composing as the entire process from planning via writing the score (or producing a tape) right up to performance, or merely as the intellectual act of invention. If we limit ourselves to the intellectual act of invention, we speak of "composing programs", of musical grammar, of a score as a document of intellectual activity, inspiration, creative powers. If on the other hand we envisage the entire process, it can be divided into a number of single activities which can be performed by different agents: composers, musicians, generators, computers, not to forget the listeners.


These auxiliary services include, as far as we are dealing with computers:

1. the sonic realization of previously fixed score data,

2. the processing of parts of problems using libraries of subprograms,

3. the production of graphic scores or musical graphics,

4. sound production based on simple compositional rules, so that, say, the sound models from a sound library are assembled to form sound structures.

The computer performs various services in these examples: in the sonic realization of score data it replaces an electronic studio or an orchestra, whilst not being responsible for the score; dealing with parts of problems with the help of a subprogram library can be expanded to become a complete description of the act of composing; the production of graphic scores replaces the copyist, a musical graphic leaves the completion of a piece to--more or less--improvising players; sound production according to simple compositional rules has the character of a model both with regard to the sounds and to the combinatorial methods--the same group of sounds can be subjected to different combinations, or different groups of sounds can be arranged similarly.

There are advantages and drawbacks to distinguishing composing programs and computer-aided composition. An advantage is that in the case of composing programs the computer is expected to supply the compositional work, whilst in computer-aided composition the responsibility is entirely the composer's. A consideration of composition processes might therefore be limited to composing programs. A drawback is that the composition process consists of activities which cannot be separated into main and auxiliary activities so easily; even in composing programs the composer is still chiefly responsible, because he must at least prepare the data on which the composition process basically depends, if he does not in fact also write the program himself. In what follows I shall limit myself mainly to the invention of music in the form of representative models, without going into the distinction between main and auxiliary actions.

We have been occupied with programmed music at the Institute of Sonology at Utrecht University since 1965; by programmed music we mean the establishment and implementation of systems of rules or grammars, briefly: of programs, independent of the agent setting up or using the programs, independent too of sound sources. This means that programmed music covers:

1. instrumental scores which a composer writes at his desk on the basis of binding compositional rules which do not fundamentally differ from computer programs,

2. electronic compositions which like the said instrumental scores are systematically composed, then to be "mechanically", i.e. without additions and cuts, realized on studio apparatus,

3. electronic works produced automatically by the use of studio patches,

4. instrumental scores based on computer programs,

5. tapes based on sound data which were calculated and converted by a computer program.

In this field of programmed music, instrumental and electronic pieces have been realized with and without the computer, and for some years our lecture schedule has included a series of lectures with this title alongside the subject of computer sound synthesis, which does of course occasionally overlap the first one. The experience we have gained during the years, and which I assume is fairly similar to experience gained elsewhere, can be summarized as follows:

THE COMPOSING PROCESS

Opinions differ as to what a composing process is, there being all gradations between constructive and intuitive composers. Investigations in the field of programmed music can only be expected from composers who already have highly constructive inclinations or previous knowledge, or who, although keener on free expression, want to discover a new realm of experience. Among composers with constructive inclinations one often observes a tendency towards processes which to a fair extent exclude compositional decisions made in advance, i.e. the input of structure-conditioning data. They prefer to choose what corresponds to their personal taste from among the automatically produced results. If, for instance, there is a choice between two composing programs with differing input formats, the program with the smaller input format is more likely to be chosen. It is often only taken as a model for a composer's own program which gets by with even fewer data. Syntactic rules are replaced as far as possible by random decisions.

The other extreme occurs too, but rarely. Here composers mistrust the automaton or chance, and try to control the process down to the last detail. This leads to programs with large input formats, with a detailed dialogue between composer and computer, and to the composition and frequent correction of smaller and smallest form-sections. We again observe here the smooth transition from composing programs to computer-aided composition. I am speaking here, incidentally, of tendencies observed in composers working at our institute, who after becoming acquainted with existing programs go their own way and contrive their own composing systems, occasionally taking years over them. There is no doubt as to the popularity which systems for computer-aided composition generally enjoy, but this is not covered by my subject.

The fundamental difficulty in developing composing programs is indubitably in determining the dividing-line between the automatic process and the dynamic influence exerted by the composer by means of input data and dialogue. To put it briefly: when there are few data and little dialogue, automata are expected to produce the composition; when there are a lot of data and dialogue, the responsibility remains the composer's, the computer being degraded to an intelligent typewriter. The dividing-line between composer and automaton should run in such a fashion as to provide the highest degree of insight into musical-syntactic contexts in the form of the program, it being up to the composer to take up the thread--metaphorically speaking--where the program was forced to drop it, in other words: to make the missing decisions which it was not possible to formalize. The development of composing programs consists of pushing this dividing-line further and further away. Whoever goes along with this personal opinion of mine will realize the difficulties involved in this approach. For the attempt to formalize is not only oriented towards a medium--music--which, as opposed to natural language, tends to unravel rather than to fix; every composer will moreover imagine the dividing-line to be in other areas, depending on his stylistic criteria and expressive requirements. The computer program with which a composer is confronted may pose him a puzzle instead of solving it.

COMPOSITIONAL RULES


Another issue closely linked with that of rules is the compositional result achieved with composing programs. Rules abstracted from music by means of analysis, introspection or model construction result primarily in the acoustic (or graphic) equivalent of this abstraction; the relation to music has to be created again. This, too, can be done in different ways. I expressly mention this retranslation because it should already be kept in mind when a composing program is being designed. There are various ways of doing this too; I shall deal with three of them here.

One possible evaluation is the comparison with precedents which are to be imitated by means of the program or suitable input data. This particularly applies to programs written on the basis of extensive analysis of existing music. Apart from the trivial question as to whether the program is carrying out the given rules correctly, so that the composed result contains the desired quantities in the desired combinations, it would be good to examine whether, when listening to the results, there is an aesthetic experience comparable to the precedent. I use this vague term, aesthetic experience, to designate the quality which distinguishes, say, a written score from its performance, or a composer's material from the constellations in which it eventually appears in his music.

Another evaluation refers to the expectations of the writer or user of the program. Especially in the case of the already mentioned introspection, this does not involve reviewing existing aesthetic products, but in a way looking forwards for ideals to inspire a composer in his work. The result of such an evaluation depends on goals which do not refer to precedents with which they can be compared. This evaluation is consequently less communicable than the first one, involving the formalizable comparison of original and copy.

A third evaluation is to look for musically experienceable references in a context produced by means of models. Since the model merely marks out the framework within which aesthetically communicable contexts are presumed to be, the results produced by means of the model cannot be compared with precedents, nor with ideals. They appeal rather to the evaluator's capacity for discovering what is special in what is general, i.e. the accumulation of what is significant in surroundings whose significance is at the most latent. As already explained, it is up to methodical experimentation to determine the validity-range of the model--and thus the probability of accumulating aesthetically communicable constellations; one must not forget that the models only describe basic structures which will need detailed elaboration in the form of a score or tape. What I have been saying here obviously also applies to the evaluation of precedents and expectations.

COMPOSITIONAL METHODS

I shall now turn to some compositional methods which are due to introspection or which might be useful in constructing models, but which in any case represent generalizations of the concrete process of composing. Note, though, that they remain within the range of experience of the composer writing, or just using, a computer program.

Interpolation might be a good name for a method which so to speak pushes forwards from the outer limits of the total form into the inner areas; applied to the dimension of time this would mean: dividing the total duration into sections, the sections into groups, the groups into sub-groups and so on, until the durations of the individual sounds can be established. We could apply this method accordingly to other dimensions too, by speaking of aspects, partial aspects, variants and modifications.
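
As a rough illustration of such an interpolation, the following sketch divides a total duration recursively from the outside inwards. The branching factors and the random proportions are illustrative assumptions of mine, not rules taken from any actual composing program.

```python
import random

def interpolate(total, depth, min_parts=2, max_parts=4):
    """Divide a duration top-down: total form -> sections -> groups -> sounds.

    At each level the span is split into a chance-determined number of parts
    with chance-determined proportions; when `depth` reaches zero, the spans
    are taken as the durations of individual sounds.
    """
    if depth == 0:
        return [total]
    n = random.randint(min_parts, max_parts)
    weights = [random.uniform(0.5, 1.5) for _ in range(n)]
    scale = total / sum(weights)
    result = []
    for w in weights:
        result.extend(interpolate(w * scale, depth - 1, min_parts, max_parts))
    return result

# e.g. a 60-second total form, divided over three levels;
# the parts always rebuild the whole duration
durations = interpolate(60.0, depth=3)
```

Extrapolation, described next, would simply run the same hierarchy in the opposite direction, from the individual durations outwards.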


By contrast, extrapolation would proceed from the interior towards the outside: from the individual sound to the group of sounds, thence to the super-groups, via sections to the total form. Both methods are concentric; the formal shells which so to speak enclose the nucleus of the form exist in ideal simultaneity; the form is not unfolded teleologically but rather pedagogically, the details being presented in such a way that the relation of the detail to the whole is always quite clear to the listener.

As opposed to these two methods of interpolation and extrapolation there is a third, which I should like to call chronological-associative. The composing process unfolds along the time-axis, thus being put in the position of the ideal listener. Note that in this way every event is given its irremovable place in time, whereas in the previous examples of interpolation and extrapolation the events were interchangeable.

A combination of methods more oriented towards time or space can be found in the composition of blocks; by a block I mean a part of a structure which requires complementing by other blocks but which is still complete in itself. It is easier to state rules for blocks than for entire pieces, because they are of shorter duration and do not have to meet the demands made on pieces. Individual blocks can be produced by means of interpolation, extrapolation or the chronological-associative method; their order is determined by the composer, i.e. outside the scope of the formalization in the program.

The chronological-associative method can finally be extended to the teleological or goal-oriented method by means of feedback. Here the composer supplements individual data and syntactic rules describing only local strategy by objectives with which local events are continually compared. This type of method seems to approach most closely the real process of composition, but it also involves the greatest difficulties of representation in program structures.
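
A minimal sketch of such a feedback loop might look as follows. The objective (a target mean pitch), the purely random local strategy and the MIDI-like pitch range are toy assumptions of mine, chosen only to show the shape of the method: candidate events are generated chronologically and each is compared with the objective before being accepted.

```python
import random

def compose_teleologically(goal_pitch, length, tolerance=2, tries=50):
    """Chronological composing with feedback: each new event is checked
    against a global objective (here: the mean pitch of the piece so far)
    before it is accepted into the sequence."""
    events = []
    for _ in range(length):
        for _ in range(tries):
            candidate = random.randint(48, 72)   # local strategy: pure chance
            trial = events + [candidate]
            # feedback: compare the local event's effect with the objective
            if abs(sum(trial) / len(trial) - goal_pitch) <= tolerance:
                events.append(candidate)
                break
        else:
            events.append(goal_pitch)            # fall back toward the goal
    return events

melody = compose_teleologically(goal_pitch=60, length=12)
```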

PRACTICAL ASPECTS

To close this section, I shall talk about a few practical aspects of writing composing programs, their accessibility and the forms of data output.

The writer of a composing program must first of all clearly define the points of departure and goals. Points of departure are chiefly in the relation between computer and composer, i.e. between the musical knowledge stored in the program and the input data the composer uses to manipulate this knowledge. Goals are related to the extent and kind of the expected results. The definition of the compositional method is also important; for instance, rules, probability matrices, weighting factors, chance etc. have to be taken into account.

The accessibility of composing programs is primarily a question of the available computer system: how much computer time can be given to users, either in the single-user or time-sharing mode; furthermore it is a question of program construction: whether input data must be read in, or whether the composer in the course of a dialogue with the computer can continually influence the program; accessibility also depends on turn-around time, i.e. on how long it takes for the composer to receive the output; it is finally a question of the program language if only a subprogram library is available and the composer has to write his own main program.

Data is usually output in the form of tables, musical graphics or sounds. Tables sometimes need to be laboriously transcribed into musical notation; musical graphics are restricted to standard notation; it is very practical to have a sound output of a composed text giving the composer a first impression in the three parameters of pitch, loudness and duration, before he decides to have tables printed or musical graphics executed. Things are different with systems which do not produce a score but only a sound result; the above-mentioned criteria of accessibility play an important part here.

To round off this paper on composition processes I shall deal in more detail with a few programs developed or in the process of being developed at the Institute of Sonology. I shall classify them as composing programs for language structures (instrumental music), sound-generating programs in the standard approach, sound-generating programs in the non-standard approach, and program-generating systems based on grammars.

COMPOSING PROGRAMS FOR LANGUAGE STRUCTURES

From 1964 to 1966 I wrote a composing program myself. I regard it as a first attempt, and therefore called it Project One, abbreviated to PR1 [4]. It has had a lively history: after its first version in FORTRAN II, tried out on an IBM 7090, I made an ALGOL 60 version for an ELECTROLOGICA X8 computer at the computer department of Utrecht University. When the Institute of Sonology acquired its own computer in 1971, I made a FORTRAN IV version of the program for our PDP-15. Still, a few years later, after six VOSIM generators had been built according to Kaegi's model, I gave Project One a sound output making it possible for a composition to be heard in the three dimensions of time, pitch and loudness.

I had the idea of collating my experience with programmed music at the desk and in the electronic studio to form a model which would be able to produce a large number of variants of itself almost fully automatically. Faithful to the fundamentals of the nineteen-fifties, all the parameters involved were supposed to have at least one common characteristic; for this I chose the pair of terms "regular/irregular". "Regular" means here that a selected parameter value is frequently repeated: this results in groups with similar rhythms, octave registers or loudness, similar harmonic structure or similar sonorities. The duration of such groups is different in all parameters, resulting in over-lappings. "Irregularity" means that a selected parameter value cannot be repeated until all or at least many values of this parameter have had a turn. The choice of parameter values and group quantities was left to chance, as was the question of the place a given parameter should occupy in the range between regularity and irregularity. A composer using this program only has to fix metronome tempi, rhythmic values and the length of the composition; in other words, he only decides on the time framework of the result, and this only roughly, because all details are generated by the automatism of the program.
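
The regular/irregular principle can be sketched roughly as follows; the group lengths, the example value set (octave registers) and the refill rule are illustrative assumptions of mine, not the actual mechanism of Project One.

```python
import random

def regular(values, n):
    """'Regular': a chance-selected parameter value is frequently repeated,
    giving groups with a similar character (rhythm, register, loudness...)."""
    out = []
    while len(out) < n:
        v = random.choice(values)
        out.extend([v] * random.randint(2, 5))   # group length left to chance
    return out[:n]

def irregular(values, n):
    """'Irregular': a value may not return until all other values of the
    parameter have had a turn -- a serial-style permutation principle."""
    out, pool = [], []
    for _ in range(n):
        if not pool:
            pool = random.sample(values, len(values))  # fresh shuffled supply
        out.append(pool.pop())
    return out

registers = [1, 2, 3, 4, 5, 6]
reg = regular(registers, 12)     # e.g. runs such as 3,3,3,5,5,...
irr = irregular(registers, 12)   # every register heard before any returns
```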

Experiments on a large scale were not made with this program, however, because it is a laborious and time-consuming business to transcribe the tables printed by the computer into notation; the turn-around time was too long as well, as long as the program was still running at the University computer centre. Not until the program was installed on our own computer was it possible for a composer to produce several variants in a similar length of time and compare them; but by then I had already written "Project Two", which I shall discuss presently, and which allows the composer to have more influence on the composition process. "Project One" therefore gathered dust until--about a year ago--we could build six VOSIM generators which play composers their results in real time. The large number of experiments which could be made in a short time revealed a second "dividing line"; the first one, mentioned earlier, separates the independent achievements of the program from the composer's possibilities of influencing it. The second one, discovered with the sound results of "Project One", separates the significance of random decisions in the micro and macro range of the form. In my first design of PR1 I was led by the idea that since the details of the form already depend on chance, although within certain limits, the overall form can be subjected to chance too, as long as care is taken that the various aspects of the form described by the program are really given the opportunity. More recent experiments with the program have shown, however, the extent to which the rules for the overall form affect the course of the form in detail; it appears that the comprehension of a listener--who horizontally combines single tones into groups of tones and groups of tones into larger units, vertically observes the changing structure of different durations, pitches and loudnesses, and connects these data in an as yet uninvestigated manner with impressions, changing in time, of musical states--is increased when either the details of such a state or the states themselves arouse expectations which can be fulfilled in the further course of developments. In order to observe this phenomenon more closely, I first created a possibility of revising, by means of user data, the random decisions for the overall form which had been made by the program. The results are encouraging and indicate ways of defining more precisely the said "dividing line" between the micro and the macro-form. The reverse method is in preparation too, by which the user will have greater influence on the construction of the detail.

This work on "Project One" took up the time which I had really intended to spend on a new version of "Project Two". Since this second composing program has already been exhaustively documented in the Electronic Music Reports, published by our Institute, I can make do with a brief summary here [5]. As I have already mentioned, PR2 was written shortly after I had finished PR1, about 1966 to 1968. Up to now it has run in an ALGOL 60 version at the University computer department, and therefore suffered from the same problems as the first project. I am at present translating the ALGOL program into a FORTRAN version which will be able to run on our own PDP-15 and be given a VOSIM sound output. I am also planning an extended and improved version.

    Two properties chiefly characterize "Project Two". On the one hand the user is expected to supply a lot of input data, not only defining the value ranges in eight parameters but also making the parameters interdependent; on the other hand the individual decisions within the form-sections are not made to depend on chance, as in PR1, but on selection mechanisms specified by the composer. PR2, like PR1, realizes the idea of a form-model which can be tested in any number of variants. Both programs, PR2 more so than PR1, require careful preliminary work from the composer, since they are not interactive. The planned new version of PR2 will however make it possible for the computer and composer to hold long dialogues.
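    To give a feel for what such composer-specified selection mechanisms can look like, here is a small sketch in Python. The mechanism names and their exact behaviour are my own illustrative assumptions, not PR2's actual input format; the point is only the contrast between pure chance and a rule that constrains repetition:

```python
import random

# Two illustrative selection mechanisms (hypothetical names, not
# PR2's actual input format): pure chance versus a "series" rule
# that exhausts the whole supply before any value may repeat.

def alea(supply, n, rng):
    """Pure chance: every choice independent, repetitions allowed."""
    return [rng.choice(supply) for _ in range(n)]

def series(supply, n, rng):
    """Serial selection: use up the whole supply before repeating."""
    out, pool = [], []
    for _ in range(n):
        if not pool:
            pool = list(supply)
            rng.shuffle(pool)
        out.append(pool.pop())
    return out

rng = random.Random(1)
pitches = ["c", "d", "e", "f"]
run = series(pitches, 8, rng)
# Each group of four choices contains every pitch exactly once.
assert sorted(run[:4]) == sorted(run[4:]) == sorted(pitches)
```

Under a rule of the second kind, the detail is still unpredictable, but the composer's constraint, not blind chance, governs the distribution of values.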

    SOUND PROGRAMS IN THE STANDARD APPROACH

    The question as to composition processes inevitably leads to that of the construction of the sounds in composed structures, insofar as the latter did not precede the former. In sound-generating computer programs we distinguish, as proposed by Holtzman,[6] between the "standard" and the "non-standard" approach. To quote Holtzman: "Standard approaches are characterized by an implementation process where, given a description of the sound in terms of some acoustic model, machine instructions are ordered in such a way so as to simulate the sound described; the non-standard approach, given a set of instructions, relates them one to another in terms of a system which makes no reference to some super-ordinated model, (...) and the


    relationships formed are themselves the description of the sound." Standard systems are seen as more or less "top-down" systems where the synthesis technique is conceived of as manipulated in terms of a given acoustic model. In digital synthesis, programs developed by M. Mathews, i.e. Music IV-V, exemplify the standard approach to sound synthesis and form the basis of other major synthesis programs, e.g., Vercoe's Music 360, Howe's Music 4BF etc.[7] The VOSIM sound output to PR1 and PR2 also belongs to standard systems, and so do programs for a digital hardware Fourier generator, two digital hardware frequency modulation generators (after Chowning's model[8]) and Kaegi's MIDIM system. This list should also include Truax's POD5 and POD6, which there is not enough time to examine more closely here. I am also skipping the Fourier generator and a program written by William Matthews for using FMS generators. The said programs are chiefly for pure sound production and will not be able to take part in the composition of language structures until they are embodied in suitable composing programs. Unfortunately I cannot say very much about Kaegi's MIDIM program, since at the time of writing this paper his manual was not yet available. It is based, as a sound-generating program, on the VOSIM system[9] for the minimal description of speech sounds, which has since been expanded to apply to instrumental sounds. At the same time, however, it is a transition to composing systems, only that here one proceeds from the sound to the composition instead of the other way round; as far as I know, this is a unique case. This transition is achieved by having a library of instrument definitions continuously compared with the structure-generating grammar.
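    Since Chowning's model is mentioned above only in passing, a minimal sketch may help to show what the standard approach means in practice: the sound is described in terms of an acoustic model, here the simple FM equation y(t) = A sin(2 pi fc t + I sin(2 pi fm t)). The parameter values below are illustrative assumptions, not those of the hardware generators named above:

```python
import math

# Minimal sketch of simple FM after Chowning's model:
#   y(t) = A * sin(2*pi*fc*t + I * sin(2*pi*fm*t))
# Carrier, modulator and index values are illustrative assumptions.

def fm_sample(t, fc=440.0, fm=110.0, index=2.0, amp=1.0):
    mod = index * math.sin(2 * math.pi * fm * t)
    return amp * math.sin(2 * math.pi * fc * t + mod)

sr = 8000
wave = [fm_sample(i / sr) for i in range(sr)]  # one second of samples
assert all(-1.0 <= s <= 1.0 for s in wave)     # amplitude stays bounded
```

The composer's data (carrier, modulator, index) refer to the acoustic model, not to the machine; that is precisely what marks the approach as "standard" in Holtzman's sense.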

    SOUND PROGRAMS IN THE NON-STANDARD APPROACH

    Among "non-standard" systems, as produced at the Institute of Sonology, Paul Berg's PILE and my own SSP can be named. (Kees van Prooijen's CYCLE program is so similar to PILE that it is sufficient to mention it.) To clarify these I quote another passage from Holtzman's article: "... Samples are related only one to another, the relationships created determining the timbre, frequency, etc.; 'related only one to another' suggests that the relationships are diacritically defined and do not refer to some super-ordinate model or function. For example, given a set of possible relationships that may exist between samples in a digital computer, one considers the relationships only in terms of computer instructions - i.e. they may be related by, and only by, machine instructions which can alter the state of a certain register, e.g. the accumulator. In such a system, one sample may be related to the previous two samples as the result of their 'XORing'. The samples are conceived of in terms of machine instructions rather than on the basis of some acoustic theory."[10]
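    Holtzman's XOR example can be paraphrased in a few lines of Python (a sketch of the idea, not Berg's actual PILE code):

```python
# Paraphrase of Holtzman's XOR example (a sketch of the idea, not
# Berg's actual PILE code): each new 8-bit sample is the XOR of the
# two preceding samples.

def xor_samples(seed_a, seed_b, n):
    """Generate n samples; each one is the XOR of the previous two."""
    samples = [seed_a & 0xFF, seed_b & 0xFF]
    while len(samples) < n:
        samples.append(samples[-1] ^ samples[-2])
    return samples

wave = xor_samples(0x35, 0xC2, 1000)
assert wave[2] == (0x35 ^ 0xC2)  # the relation holds sample by sample
assert wave[3:6] == wave[0:3]    # this rule repeats every 3 samples
```

With this particular relation the output is periodic with period three, i.e. a steady buzz; its "timbre" is determined entirely by the machine instruction, exactly as the quotation describes.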

    Holtzman's example of two samples only related by an XOR refers to Paul Berg's PILE compiler.[11] The PILE language was written following the development of more than 20 ASP programs, in which random operations referring to the accumulator, as well as arithmetical and logical operations, were represented systematically. The computer acts as a sound-generating instrument sui generis, not imitating mechanical instruments or theoretical acoustic models. PILE, which is very popular among our students, has instructions such as BRANCH, CHOOSE, CHECK, SWITCH, STORE, SELECT, CONVERT, SEED and similar ones, which are translated by the PILE compiler into the assembly language of the computer, so that students can also study the application of machine language for sound production in practical examples. Typical of this non-standard approach is that a student or composer has hardly any possibility of describing concrete musical quantities such as pitches, timbres or loudnesses and arranging them in time. Instead he must try to


    describe and order elements of musical language such as short, long, uniform, varied, contrast, silence, similar, dissimilar, transition and the like, these terms referring to the micro-structure of the sound and to the macro-structure of the form.

    The Sound Synthesis Program (SSP) I designed in 1972 proceeds from comparable viewpoints. In this non-standard approach samples are not generated by random or arithmetical/logical machine operations, but collected in sound segments which in their turn are taken from separate amplitude and time lists. The selection of amplitude and time values is made according to principles originating in PR2. The number pairs in a segment designate turning points of the oscillation curve, which are interpolated linearly in real time during sound production. The number of segments which a composer can define is limited only by the capacity of the core memory; the order of segments is free within the framework of the same selection principles according to which the segments were produced. At the Institute we are considering the development of a special digitally controlled generator which will call the sound segments from disk files, thus doing away with the limitations of the core memory. It would then be possible to realize the concept on which SSP is based: to describe the composition as one single sound, the perception of which is represented as a function of amplitude distribution in time as sound and silence, soft and loud, high and low, rough and smooth.
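    The linear interpolation between turning points can be sketched as follows (an illustration of the principle, not the actual SSP code; the sample rate and the point format are my own assumptions):

```python
# Sketch of SSP-style segment rendering (an illustration of the
# principle, not the actual SSP code): a segment is a list of
# (time, amplitude) turning points, and the waveform is obtained
# by linear interpolation between successive points.

def render_segment(points, sample_rate=10):
    """Interpolate linearly between (time, amplitude) turning points."""
    samples = []
    for (t0, a0), (t1, a1) in zip(points, points[1:]):
        n = max(1, int(round((t1 - t0) * sample_rate)))
        for i in range(n):
            frac = i / n
            samples.append(a0 + (a1 - a0) * frac)
    return samples

# A single ramp from amplitude 0.0 towards 1.0 over one time unit:
ramp = render_segment([(0.0, 0.0), (1.0, 1.0)])
assert len(ramp) == 10 and ramp[0] == 0.0
```

Because only the turning points are stored, a long stretch of sound reduces to a short list of number pairs, which is what makes the core-memory limitation bearable.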

    PROGRAM-GENERATING SYSTEMS

    I can be brief on the subject of program-generating systems, because their development is in full swing at present, and there are no tangible results as yet. Still, investigations into composing programs and sound programs have consequences which are beyond the scope of the individual composition or construction of sounds. Although composing programs do contain fundamental statements about musical language systems, as well as personal strategies, they have neither been systematized, nor do such composing programs permit systematic research. It looks as though a super-individual approach must be found.

    Steve Holtzman, of the Artificial Intelligence Department at the University of Edinburgh, has been looking into this problem recently, and has described his work at Edinburgh and Utrecht in various articles. I shall close my paper by quoting a few passages from these articles.

    "The basic proposition is that music be considered as a hierarchical system which is characterised by dynamic behaviour. A 'system' can be defined as a collection of connected objects... The relations between objects which are themselves interconnections of other objects define hierarchical systems...

    It should be made clear that in talking of the meaning of music or language, it is not necessary to have a referent for each sign... Meaning is a question of 'positional' value. One is not concerned with the 'idea' or referent as objects but rather 'with values which issue from a system'. The values correspond to cultural units but they can be defined as pure differences... 'Units of meaning' are not defined referentially but structurally--diacritically. We can look at a structure that consists of (diacritically) defined units in a complex of transformations and relations. Meaning becomes not 'what units say/refer to/etc.' but 'what they do'--a question of function in a structure."[12]

    "In recent research, we have been developing a machine which itself, i.e. automatically, can generate program text to synthesize distinctive sounds and control-programs to manipulate smaller chunks of program. The machine approaches sound synthesis


    in the so-called non-standard manner...

    The program generator at present (1978, JS) occupies 5K core, with remaining free core (i.e. 20K) available for object text. It is implemented on a PDP-15 and requires a dedicated system. Special hardware used is some digital-to-analog converters with 500-nanosecond response time, a hardware random number generator and a hardware arithmetic unit.

    The program works in a "bottom-up" fashion, first writing small chunks of text (in compiled machine-code) that create distinctive sounds, then writing control-functions to manipulate these sound-producing programs to create "phrases" of juxtaposed sounds, and again control-programs for the subordinated phrase-programs to generate larger structures, and so on. The synthesis system is hierarchical and consists of a number of distinct levels, each in turn subordinated to another...

    Over all the rules presides what we call the complexity factor. The relations between the parameter values must interact in a manner which is within the bounds of an evaluation of their complexity measure... The complexity evaluation, for example, considers the number of samples that compose the sound: the larger the number of samples, the more complex the wave is said to be; and similarly, the more operators used or the more variables, the greater will be the complexity. At present (...) we are trying to develop an algorithm which might be said to embody an understanding of these relationships and which could be used as the basis for a grammar to generate a grammar (which in turn generates a sound-producing function)."
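    As a rough illustration of such a complexity evaluation (the weights and bounds below are my own assumptions; the quoted text gives no formula), one might write:

```python
# Rough illustration of the complexity evaluation described above;
# the weights and bounds are assumptions, not Holtzman's values.
# More samples, more operators and more variables all raise the measure.

def complexity(n_samples, n_operators, n_variables,
               w_samples=0.001, w_ops=1.0, w_vars=0.5):
    return (w_samples * n_samples
            + w_ops * n_operators
            + w_vars * n_variables)

def within_bounds(measure, low=1.0, high=10.0):
    """Accept a candidate only if its complexity lies in [low, high]."""
    return low <= measure <= high

m = complexity(n_samples=2000, n_operators=4, n_variables=3)
assert within_bounds(m)  # 2.0 + 4.0 + 1.5 = 7.5, inside the bounds
```

A generated chunk of program text would be kept or rejected according to such a test, so that the grammar's output stays within the composer-specified range of complexity.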

    Notes:

    Buxton, W., A Composer's Introduction to Computer Music, Interface 6/2, Amsterdam and Lisse, 1977.

    Holtzman, S.R., A Description of an Automated Digital Sound Synthesis Instrument, unpublished manuscript, April 1978.

    Mathews, M., The Technology of Computer Music, Cambridge, M.I.T. Press, 1969.

    Vercoe, B., The MUSIC 360 Language for Sound Synthesis, American Society of University Composers Proceedings (1971).

    Vercoe, B., Reference Manual for the MUSIC 360 Language for Digital Sound Synthesis, Cambridge, unpublished manuscript, Studio for Experimental Music, M.I.T., 1975.

    Chowning, J.M., The Synthesis of Complex Audio Spectra by Means of Frequency Modulation, J.A.E.S. 21,7 (1973).

    Kaegi, W., Tempelaars, S., VOSIM - A New Sound Synthesis System, J.A.E.S. 26,6 (1978).

    Berg, P., PILE2 - A Description of the Language, Utrecht, unpublished manuscript, Institute of Sonology, January 1978.

    Berg, P., A User's Manual for SSP, Utrecht, unpublished manuscript, Institute of Sonology, May 1978.

    Holtzman, S.R., Music as System, DAI Working Paper 26, Department of Artificial Intelligence, University of Edinburgh, April 1978.