Every artist develops her own special techniques and ideals for the execution of her art. The painter worries about brush strokes, mixing of paint, and texture; the musical composer learns techniques of orchestration, timing, and counterpoint. The game designer also acquires a variety of specialized skills, techniques, and ideals for the execution of her craft. In this chapter I will describe some of the techniques that I use.
A solitaire game pits the human player against the computer. The computer and the human are very different creatures; where human thought processes are diffuse, associative, and integrated, the machine's thought processes are direct, linear, and arithmetic. This creates a problem. A computer game is created for the benefit of the human, and therefore is cast in the intellectual territory of the human, not that of the computer. This puts the computer at a natural disadvantage. Although the computer could easily whip the human in games involving computation, sorting, or similar functions, such games would be of little interest to the human player. The computer must play on the human's home turf, something it does with great difficulty. How do we design the game to challenge the human? Four techniques are available: vast resources, artificial smarts, limited information, and pace.
This is by far the most heavily used technique for balancing a game. The computer is provided with immense resources that it uses stupidly. These resources may consist of large numbers of opponents that operate with a rudimentary intelligence. Many games use this ploy: SPACE INVADERS, MISSILE COMMAND, ASTEROIDS, CENTIPEDE, and TEMPEST are some of the more popular games to use this technique. It is also possible to equip the computer with a small number of opponents that are themselves more powerful than the human player's units, such as the supertanks in BATTLEZONE. The effect in both cases is the same: the human player's advantage in intelligence is offset by the computer's material advantages.
This approach has two benefits. First, it gives the conflict between the human and the computer a David versus Goliath air. Most people would rather win as apparent underdog than as equal. Second, this approach is the easiest to implement. Providing artificial intelligence for the computer's players can be difficult, but repeating a process for many computer players takes little more than a simple loop. Of course, the ease of implementing this solution carries a disadvantage: everybody else does it. We are knee-deep in such games! Laziness and lack of determination have far more to do with the prevalence of this technique than game design considerations.
The obvious alternative to the use of sheer numbers is to provide the computer player with intelligence adequate to meet the human on equal terms. Unfortunately, artificial intelligence techniques are not well enough developed to be useful here. Tree-searching techniques have been developed far enough to allow us to produce passable chess, checkers, and Othello players. Any other game that can be expressed in direct tree-searching terms can be handled with these techniques. Unfortunately, very few games are appropriate for this treatment.
An alternative is to develop ad-hoc artificial intelligence routines for each game. Since such routines are too primitive to be referred to as "artificial intelligence", I instead use the less grandiose term "artificial smarts". This is the method I have used in TANKTICS, EASTERN FRONT 1941, and LEGIONNAIRE, with varying degrees of success. This strategy demands great effort from the game designer, for such ad-hoc routines must be reasonable yet unpredictable.
Our first requirement of any artificial smarts system is that it produce reasonable behavior. The computer should not drive its tanks over cliffs, crash spaceships into each other, or pause to rest directly in front of the humanís guns. In other words, obviously stupid moves must not be allowed by any artificial smarts system. This requirement tempts us to list all possible stupid moves and write code that tests for each such stupid move and precludes it. This is the wrong way to handle the problem, for the computer can demonstrate unanticipated creativity in the stupidity of its mistakes. A better (but more difficult) method is to create a more general algorithm that obviates most absurd moves.
A second requirement of an artificial smarts routine is unpredictability. The human should never be able to second-guess the behavior of the computer, for this would shatter the illusion of intelligence and make victory much easier. This may seem to contradict the first requirement of reasonable behavior, for reasonable behavior follows patterns that should be predictable. The apparent contradiction can be resolved through a deeper understanding of the nature of interaction in a game. Three realizations must be combined to arrive at this deeper understanding. First, reaction to an opponent is in some ways a reflection of that opponent. A reasonable player tries to anticipate his opponent's moves by assessing his opponent's personality. Second, interactiveness is a mutual reaction---both players attempt to anticipate each other's moves. Third, this interactiveness is itself a measure of "gaminess". We can combine these three realizations in an analogy. A game becomes analogous to two mirrors aligned towards each other, with each player looking out from one mirror. A puzzle is analogous to the two mirrors being unreflective; the player sees a static, unresponsive image. A weakly interactive game is analogous to the two mirrors being weakly reflective; each player can see and interact at one or two levels of reflection. A perfectly interactive game (the "gamiest game") is analogous to the two mirrors being perfectly reflective; each of the two players recursively exchanges places in an endless tunnel of reflected anticipations. No matter how reasonable the behavior, the infinitely complex pattern of anticipation and counter-anticipation defies prediction. It is reasonable yet unpredictable.
Unfortunately, a perfectly interactive game is beyond the reach of microcomputers, for if the computer is to anticipate human moves interactively, it must be able to assess the personality of its opponents---a hopeless task as yet. For the moment, we must rely on more primitive guidelines. For example, my experience has been that algorithms are most predictable when they are "particular". By "particular" I mean that they place an emphasis on single elements of the overall game pattern. For example, in wargames, algorithms along the lines of "determine the closest enemy unit and fire at it" are particular and yield predictable behavior.
I have found that the best algorithms consider the greatest amount of information in the broadest context. That is, they will factor into their decision-making the largest number of considerations rather than focus on a small number of particular elements. To continue with the example above, a better algorithm might be "determine the enemy unit posing the greatest combination of threat and vulnerability (based on range, activity, facing, range to other friendly units, cover, and sighting); fire on unit if probability of kill exceeds probability of being killed".
How does one implement such principles into specific algorithms? I doubt that any all-purpose system can ever be found. The best general solution I have found so far for this problem utilizes point systems, field analysis, and changes in the game structure.
First, I establish a point system for quantifying the merit of each possible move. This is a time-honored technique for many artificial intelligence systems. A great deal of thought must go into the point system. The first problem with it is one of dynamic range: the designer must ensure that the probability of two accessible moves each accumulating a point value equal to the maximum value allowed by the word size (eight bits) approaches zero. In other words, we can't have two moves each getting a score of 255 or we have no way of knowing which is truly the better move. This problem will diminish as 16-bit systems become more common.
A second problem with the point system is the balancing of factors against each other. In our hypothetical tank game used above, we agree that climbing on top of a hill is good, but we also agree that moving onto a road is good. Which is better? If a hilltop position is worth 15 points, what is a road position worth? These questions are very difficult to answer. They require a deep familiarity with the play of the game. Unfortunately, such familiarity is impossible to attain with a game that has yet to be completed. The only alternative is broad experience, intimate knowledge of the situation being represented, painstaking analysis, and lots of experimenting.
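In modern notation, such a point system can be sketched in a few lines of Python. Everything specific here is hypothetical: the factor names, the weights, and the 255-point cap (standing in for the 8-bit word size) are illustrative assumptions, not values taken from any actual game.

```python
# Illustrative point system for ranking candidate moves in a tank game.
# The factors and weights are hypothetical; tuning them demands the kind
# of playtesting described in the text.

WEIGHTS = {"hilltop": 15, "road": 8, "cover": 12}  # hypothetical balance

def score_move(features):
    """Sum weighted merit points for one candidate move, capped at 255
    to mimic the 8-bit word-size limit discussed above."""
    raw = sum(WEIGHTS[f] for f in features)
    return min(raw, 255)

def best_move(candidates):
    """Pick the highest-scoring move. Two moves pinned at the 255 cap
    would be indistinguishable: the dynamic-range problem."""
    return max(candidates, key=lambda m: score_move(m["features"]))

moves = [
    {"name": "climb hill", "features": ["hilltop"]},
    {"name": "take road under cover", "features": ["road", "cover"]},
]
print(best_move(moves)["name"])  # → take road under cover (8 + 12 > 15)
```

The code itself is trivial; tuning the weight table is where the real design work described above actually lies.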
A second element of my general approach to artificial smarts is the use of field analysis. This is only applicable to games involving spatial relationships. In such games the human relies on pattern recognition to analyze positions and plan moves. True pattern recognition on the level of human effort is beyond the abilities of a microcomputer. However, something approaching pattern recognition can be attained through the use of field analysis. The key effort here is the creation of a calculable field quantity that correctly expresses the critical information needed by the computer to make a reasonable move. For example, in several of my wargames I have made use of safety and danger fields that tell a unit how much safety or danger it faces. Danger is calculated by summing the quotients of enemy units' strengths divided by their ranges; thus, large close units are very dangerous and small distant units are only slightly dangerous. A similar calculation with friendly units yields a safety factor. By comparing the danger value at its position with the safety value at its position, a unit can decide whether it should exhibit bold behavior or timid behavior. Once this decision is made, the unit can look around it and measure the net danger minus safety in each position into which the unit could move. If it is feeling bold, it moves towards the danger; if it is feeling timid, it moves away. Thus, the use of fields allows a unit to assess a spatial array of factors.
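A minimal Python sketch of such danger and safety fields follows. The strength-divided-by-range sum comes straight from the description above; the dictionary representation of units and the clamp that keeps range from reaching zero are my own assumptions.

```python
import math

def field_value(units, x, y):
    """Sum strength / range over units: strong, nearby units dominate."""
    total = 0.0
    for u in units:
        r = math.hypot(u["x"] - x, u["y"] - y)
        total += u["strength"] / max(r, 1.0)   # clamp: avoid division by zero
    return total

def choose_step(unit, friends, enemies,
                steps=((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0))):
    """Compare danger and safety at the unit's position; bold units climb
    the net danger field, timid units descend it."""
    danger = field_value(enemies, unit["x"], unit["y"])
    safety = field_value(friends, unit["x"], unit["y"])
    bold = safety >= danger

    def net(step):
        x, y = unit["x"] + step[0], unit["y"] + step[1]
        return field_value(enemies, x, y) - field_value(friends, x, y)

    return max(steps, key=net) if bold else min(steps, key=net)

# A lone unit facing one strong enemy feels timid and backs away:
unit = {"x": 0, "y": 0}
enemies = [{"x": 5, "y": 0, "strength": 10}]
print(choose_step(unit, [], enemies))  # → (-1, 0)
```

Note how a single scalar field stands in for pattern recognition: the unit never reasons about the map, it only reads the field at a handful of neighboring positions.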
Another technique for coping with artificial smarts problems is so simple that it seems like cheating: change the game. If an element of the game is not tractable with artificial reckoning, remove it. If you can't come up with a good way to use a feature, you really have no choice but to delete it. For example, while designing TANKTICS, I encountered a problem with lakes. If a lake was concave in shape, the computer would drive its tanks to the shore, back up, and return to the shore. The concave lake created a trap for my artificial smarts algorithm. I wasted a great deal of time working on a smarter artificial smarts routine that would not be trapped by concave lakes while retaining desirable economies of motion. After much wasted effort I discovered the better solution: delete concave lakes from the map.
Ideally, the experienced game designer has enough intuitive feel for algorithms that she can sense game factors that are intractable and avoid them during the design stages of the game. Most of us must discover these things the hard way and retrace our steps to modify the design. Experiencing these disasters is part of what provides the intuition.
A special problem is the coordination of moves of many different units under the control of the computer. How is the computer to assure that the different units move in a coordinated way and that traffic jams don't develop? One way is to use a sequential planning system coupled with a simple test for the position of other units. Thus, unit #1 moves first, then #2, then #3, with each one avoiding collisions. I can assure you from my own experience that this system replaces collisions with the most frustrating traffic jams. A better way uses a virtual move system in which each unit plans a virtual move on the basis of the virtual positions of all units. Here's how it works: we begin with an array of real positions of all computer units. We create an array of virtual positions and initialize all virtual values to the real values. Then each unit plans its move, avoiding collisions with the virtual positions. When its move is planned, it places its planned final position into the virtual array. Other units then plan their moves. After all units have planned one virtual move, the process repeats, with each unit planning its move on the basis of the interim virtual move array. This huge outer loop should be convergent; after a sufficient number of iterations the routine terminates and the virtual positions form the basis of the moves made by the computer's units. This technique should be useful for coordinating the moves of many units and preventing traffic jams.
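The virtual move system can be sketched as follows. The fixed iteration count and the shape of the per-unit planning function are my assumptions; a real game would test for convergence and rank candidate moves with its own point system.

```python
def plan_moves(units, candidate_moves, rounds=3):
    """Coordinate many units with a virtual-position array.

    units: list of (x, y) real positions.
    candidate_moves(i, virtual): hypothetical per-game function returning
    unit i's preferred destinations, best first.
    """
    virtual = list(units)                  # initialize virtual = real
    for _ in range(rounds):                # outer loop, assumed convergent
        for i in range(len(units)):
            occupied = {p for j, p in enumerate(virtual) if j != i}
            for pos in candidate_moves(i, virtual):
                if pos not in occupied:    # avoid virtual collisions
                    virtual[i] = pos       # publish the planned position
                    break
    return virtual

def prefer_center(i, virtual):
    # Everyone wants the same cell; each unit has a private fallback.
    return [(1, 0), (i, 5)]

print(plan_moves([(0, 0), (2, 0)], prefer_center))  # → [(1, 0), (1, 5)]
```

The first unit claims the contested cell; the second sees that claim in the virtual array and takes its fallback instead of joining a traffic jam.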
No matter how good an algorithm is, it has a limited regime of applicability. The odds are that a specific algorithm will work best under a narrow range of conditions. A good game design must offer a broad range of conditions to be truly interesting. Thus, the designer must frequently create a number of algorithms and switch from one to another as conditions change. The transition from one algorithm to another is fraught with peril, for continuity must be maintained across the transition. I well remember a frustrating experience with algorithm transitions with LEGIONNAIRE. The computer-barbarians had three algorithms: a "run for safety" algorithm, an "approach to contact" algorithm, and an "attack" algorithm. Under certain conditions a barbarian operating under the "approach to contact" algorithm would decide on bold behavior, dash forward to make contact with the human, and make the transition to the "attack" algorithm, which would then declare an attack unsafe. The barbarian would thus balk at the attack, and convert to the "run for safety" algorithm, which would direct it to turn tail and run. The human player was treated to a spectacle of ferociously charging and frantically retreating barbarians, none of whom ever bothered to actually fight. I eventually gave up and re-designed the algorithms, merging them into a single "advance to attack" algorithm with no transitions.
The artificial smarts techniques I have described so far are designed for use in games involving spatial relationships. Many games are non-spatial; other artificial smarts techniques are required for such games. One of the most common types of non-spatial games uses systems that behave in complex ways. These games often use coupled differential equations to model complex systems. LUNAR LANDER, HAMMURABI, ENERGY CZAR, and SCRAM are all examples of such games. The primary problem facing the designer of such games is not so much to defeat the human as to model complex behavior. I advise the game designer to be particularly careful with games involving large systems of coupled differential equations. HAMMURABI uses three coupled first-order differential equations, and most programmers find it tractable. But the complexity of the problem rises very steeply with the number of differential equations used. ENERGY CZAR used the fantastic sum of 48 differential equations, a feat made believable only by the fact that many constraints were imposed on them. In general, be wary of more than four coupled differential equations. If you must use many differential equations, try to use parallel differential equations, in which the same fundamental equation is applied to each element of an array of values.
To help keep the system balanced, each differential equation should have a damping factor that must be empirically adjusted:
new value = old value + (driving factor / damping factor)
A small damping factor produces lively systems that bounce around wildly. A large damping factor yields sluggish systems that change slowly. Unfortunately, recourse to simple damping factors can backfire when a relationship of negative feedback exists between the "new value" and the "driving factor". In this case, large damping inhibits the negative feedback, and one of the variables goes wild. The behavior of systems of differential equations is complex; I suggest that designers interested in these problems study the mathematics of overdamped, underdamped, and critically damped oscillatory systems. For more general information on solving systems of differential equations, any good textbook on numerical analysis will serve as a useful guide.
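The update rule above, applied to a hypothetical two-variable system, looks like this in Python. The population/food pair and all the constants are invented for illustration; only the new = old + driving/damping form comes from the text.

```python
def step(values, driving, damping):
    """Apply new = old + (driving factor / damping factor) per variable."""
    return {k: values[k] + driving[k](values) / damping[k] for k in values}

# Hypothetical HAMMURABI-like pair with negative feedback between them:
# population grows on surplus food, food shrinks as population grows.
state = {"population": 100.0, "food": 500.0}
driving = {
    "population": lambda s: 0.1 * (s["food"] - 3.0 * s["population"]),
    "food": lambda s: 200.0 - 2.0 * s["population"],
}
damping = {"population": 4.0, "food": 2.0}   # tuned by experiment

for _ in range(10):
    state = step(state, driving, damping)
# With these damping factors the pair spirals toward equilibrium;
# shrink the damping and the same system bounces around wildly.
```

Note that the same `step` function serves every variable: this is the parallel-equation discipline recommended above.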
The application of all of these methods may well produce a game with some intelligence, but one's expectations should not be too high. Even the expenditure of great effort is not enough to produce truly intelligent play; none of my three efforts to date play with an intelligence that is adequate, by itself, to tackle a human player. Indeed, they still need force ratios of at least two to one to stand up to the human player.
Another way to make up for the computerís lack of intelligence is to limit the amount of information available to the human player. If the human does not have the information to process, he cannot apply his superior processing power to the problem. This technique should not be applied to excess, for then the game is reduced to a game of chance. It can, nevertheless, equalize the odds. If the information is withheld in a reasonable context (e.g., the player must send out scouts), the restrictions on information seem natural.
Limited information provides a bonus: it can tickle the imagination of the player by suggesting without actually confirming. This only happens when the limitations on the information are artfully chosen. Randomly assigned gaps in information are confusing and frustrating rather than tantalizing. A naked woman can be beautiful to the male eye, but an artfully dressed woman can conceal her charms suggestively and thus appear even more alluring. The same woman randomly covered with miscellaneous bits of cloth would only look silly.
Another way to even the balance between human and computer is through the pace of the game. The human may be smart, but the computer is much faster at performing simple computations. If the pace is fast enough, the human will not have enough time to apply his superior processing skills, and will be befuddled. This is a very easy technique to apply, so it comes as no surprise that it is very heavily used by designers of skill and action games.
I do not encourage the use of pace as an equalizing agent in computer games. Pace only succeeds by depriving the human player of the time he needs to invest a larger portion of himself into the game. Without that investment, the game can never offer a rich challenge. Pace does for computer games what the one-night stand does for romance. Like one-night stands, it will never go away. We certainly do not need to encourage it.
These four techniques for balancing computer games are never used in isolation; every game uses some combination of the four. Most games rely primarily on pace and quantity for balance, with very little intelligence or limited information. There is no reason why a game could not use all four techniques; indeed, this should make the game all the more successful, for, by using small amounts of each method, the game would not have to strain the limitations of each. The designer must decide the appropriate balance of each for the goals of the particular game.
Every game establishes a relationship between opponents that each player strives to exploit to maximum advantage. The fundamental architecture of this relationship plays a central role in the game. It defines the interactions available to the players and sets the tone of the game. Most computer games to date utilize very simple player-to-player relationships; this has limited their range and depth. A deeper understanding of player-to-player relationships will lead to more interesting games.
The simplest architecture establishes a symmetric relationship between the two players. Both possess the same properties, the same strengths and weaknesses. Symmetric games have the obviously desirable feature that they are automatically balanced. They tend to be much easier to program because the same processes are applied to each player. Finally, they are easier to learn and understand. Examples of symmetric games include COMBAT for the ATARI 2600, BASKETBALL, and DOG DAZE by Gray Chang.
Symmetric games suffer from a variety of weaknesses, the greatest of which is their relative simplicity. Any strategy that promises to be truly effective can and will be used by both sides simultaneously. In such a case, success is derived not from planning but from execution. Alternatively, success in the game turns on very fine details; chess provides an example: an advantage of but a single pawn can be parlayed into a victory.
Because of the weaknesses of symmetric games, many games attempt to establish an asymmetric relationship between the opponents. Each player has a unique combination of advantages and disadvantages. The game designer must somehow balance the advantages so that both sides have the same likelihood of victory, given equal levels of skill. The simplest way of doing this is with plastic asymmetry. These games are formally symmetric, but the players are allowed to select initial traits according to some set of restrictions. For example, in the Avalon-Hill boardgame WIZARD'S QUEST, the players are each allowed the same number of territories at the beginning of the game, but they choose their territories in sequence. Thus, what was initially a symmetric relationship (each person has N territories) becomes an asymmetric one (player A has one combination of N territories while player B has a different combination). The asymmetry is provided by the players themselves at the outset of the game, so if the results are unbalanced, the player has no one to blame but himself.
Other games establish a more explicitly asymmetric relationship. Almost all solitaire computer games establish an asymmetric relationship between the computer player and the human player because the computer cannot hope to compete with the human in matters of intelligence. Thus, the human player is given resources that allow him to bring his superior planning power to bear, and the computer gets resources that compensate for its lack of intelligence.
The advantage of asymmetric games lies in the ability to build nontransitive or triangular relationships into the game. Transitivity is a well-defined mathematical property. In the context of games it is best illustrated with the rock-scissors-paper game. Two players play this game; each secretly selects one of the three pieces; they simultaneously announce and compare their choices. If both made the same choice the result is a draw and the game is repeated. If they make different choices, then rock breaks scissors, scissors cut paper, and paper enfolds rock. This relationship, in which each component can defeat one other and can be defeated by one other, is a nontransitive relationship; the fact that rock beats scissors and scissors beat paper does not mean that rock beats paper. Notice that this particular nontransitive relationship only produces clean results with three components. This is because each component only relates to two other components; it beats one and loses to the other. A rock-scissors-paper game with binary outcomes (win or lose) cannot be made with more than three components. One could be made with multiple components if several levels of victory (using a point system, perhaps) were admitted.
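The whole rock-scissors-paper relationship fits in one small table; the point of the sketch below is that the "beats" relation cannot be sorted into any consistent ranking, which is exactly what nontransitivity means.

```python
# Minimal rock-scissors-paper resolver: each component beats exactly one
# other and loses to exactly one other, so the relation is nontransitive.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def outcome(a, b):
    if a == b:
        return "draw"           # same choice: replay
    return "a wins" if BEATS[a] == b else "b wins"

# Rock beats scissors and scissors beat paper, yet rock loses to paper:
print(outcome("rock", "scissors"))  # → a wins
print(outcome("scissors", "paper")) # → a wins
print(outcome("rock", "paper"))     # → b wins
```

With binary outcomes no consistent table of this shape exists for more than three components, as noted above; a larger system needs graded victories, say a point value per matchup.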
Nontransitivity is an interesting mathematical property, but it does not yield rich games so long as we hew to the strict mathematical meaning of the term. The value of this discussion lies in the generalization of the principle into less well-defined areas. I use the term "triangular" to describe asymmetric relationships that extend the concept of nontransitivity beyond its formal definition.
A simple example of a triangular relationship appears in the game BATTLEZONE. When a saucer appears, the player can pursue the saucer instead of an enemy tank. In such a case, there are three components: player, saucer, and enemy tank. The player pursues the saucer (side one) and allows the enemy tank to pursue him unmolested (side two). The third side of the triangle (saucer to enemy tank) is not directly meaningful to the human---the computer maneuvers the saucer to entice the human into a poor position. This example is easy to understand because the triangularity assumes a spatial form as well as a structural one.
Triangularity is most often implemented with mixed offensive-defensive relationships. In most conflict games, regardless of the medium of conflict, there will be offensive actions and defensive ones. Some games concentrate the bulk of one activity on one side, making one side the attacker and the other side the defender. This is a risky business, for it restricts the options available to each player. It's hard to interact when your options are limited. Much more entertaining are games that mix offensive and defensive strategies for each player. This way, each player gets to attack and to defend. What is more important, players can trade off defensive needs against offensive opportunities. Triangular relationships automatically spring from such situations.
The essence of the value of triangularity lies in its indirection. A binary relationship makes direct conflict unavoidable; the antagonists must approach and attack each other through direct means. These direct approaches are obvious and expected; for this reason such games often degenerate into tedious exercises following a narrow script. A triangular relationship allows each player indirect methods of approach. Such an indirect approach always allows a far richer and subtler interaction.
Indirection is the essence of the value of triangularity to game design. Indirection is itself an important element to consider, for triangularity is only the most rudimentary expression of indirection. We can take the concept of indirection further than triangularity. Most games provide a direct relationship between opponents, as shown in the following diagram:
Since the opponent is the only obstacle facing the player, the simplest and most obvious resolution of the conflict is to destroy the opponent. This is why so many of these direct games are so violent. Triangularity, on the other hand, provides some indirection in the relationship:
With triangularity, each opponent can get at the other through the third party. The third party can be a passive agent, a weakly active one, or a full-fledged player. However, it's tough enough getting two people together for a game, much less three; therefore the third agent is often played by a computer-generated actor. An actor, as defined here, is not the same as an opponent. An actor follows a simple script; it has no guiding intelligence or purpose of its own. For example, the saucer in BATTLEZONE is an actor. Its script calls for it to drift around the battlefield without actively participating in the battle. Its function is distraction, a very weak role for an actor to play.
The actor concept allows us to understand a higher level of indirection, diagrammatically represented as follows:
In this arrangement, the players do not battle each other directly; they control actors who engage in direct conflict. A good example of this scheme is shown in the game ROBOTWAR by Muse Software. In this game, each player controls a killer robot. The player writes a detailed script (a short program) for his robot; this script will be used by the robot in a gladiatorial contest. The game thus removes the players from direct conflict and substitutes robot-actors as combatants. Each player is clearly identified with his own robot. This form of indirection is unsuccessful because the conflict itself remains direct; moreover, the player is removed from the conflict and forced to sit on the sidelines. I therefore see this form of indirection as an unsuccessful transitional stage.
The next level of indirection is shown in a very clever boardgame design by Jim Dunnigan, BATTLE FOR GERMANY. This game concerns the invasion of Germany in 1945. This was obviously an uneven struggle, for the Germans were simultaneously fighting the Russians in the east and the Anglo-Americans in the west. Uneven struggles make frustrating games. Dunnigan's solution was to split both sides. One player controls the Russians and the west-front Germans; the other controls the Anglo-Americans and the east-front Germans. Thus, each player is both invader and defender. Neither player identifies directly with the invaders or the Germans; the two combatants have lost their identities and are now actors.
The highest expression of indirection I have seen is Dunnigan's RUSSIAN CIVIL WAR game. This boardgame covers the civil war between the Reds and the Whites. Dunnigan's brilliant approach was to completely dissolve any identification between player and combatant. Each player receives some Red armies and some White armies. During the course of the game, the player uses his Red armies to attack and destroy other players' White armies. He uses his White armies to attack and destroy other players' Red armies. The end of the game comes when one side, Red or White, is annihilated. The winner is then the player most identifiable with the victorious army (i.e., with the largest pile of loser's bodies and the smallest pile of winner's bodies).
The indirection of this game is truly impressive. The two combatants are in no way identifiable with any individual until very late in the game. They are actors; Red and White battle without human manifestation even though they are played by human players. There is only one limitation to this design: the system requires more than two players to work effectively. Nevertheless, such highly indirect player-to-player architectures provide many fascinating opportunities for game design. Direct player-to-player relationships can only be applied to direct conflicts such as war. Direct conflicts tend to be violent and destructive; for this reason, society discourages direct conflicts. Yet conflict remains in our lives, taking more subtle and indirect forms. We fight our real-world battles with smiles, distant allies, pressure, and co-operation. Games with direct player-to-player relationships cannot hope to address real human interaction. Only indirect games offer any possibility of designing games that successfully explore the human condition.
As a player works with a game, s/he should show steady and smooth improvement. Beginners should be able to make some progress, intermediate people should get intermediate scores, and experienced players should get high scores. If we were to make a graph of a typical player's score as a function of time spent with the game, that graph should show a curve sloping smoothly and steadily upward. This is the most desirable case.
A variety of other learning curves can arise; they reveal a great deal about the game. If a game has a curve that is relatively flat, we say that the game is hard to learn. If the curve is steep, we say the game is easy to learn. If the curve has a sharp jump in it, we say that there is just one trick to the game, mastery of which guarantees complete mastery of the game. If the game has many sharp jumps, we say that there are many tricks. A particularly bad case arises when the player's score falls or levels off midway through the learning experience. This indicates that the game contains contradictory elements that confuse or distract the player at a certain level of proficiency. The ideal always slopes upward smoothly and steadily.
Games without smooth learning curves frustrate players by failing to provide them with reasonable opportunities for bettering their scores. Players feel that the game is either too hard, too easy, or simply arbitrary. Games with smooth learning curves challenge their players at all levels and encourage continued play by offering the prospect of new discoveries.
A smooth learning curve is worked into a game by providing a smooth progression from the beginner's level to an expert level. This requires that the game designer create not one game but a series of related games. Each game must be intrinsically interesting and challenging to the level of player for which it is targeted. Ideally, the progression is automatic; the player starts at the beginner's level and the advanced features are brought in as the computer recognizes proficient play. More commonly, the player must declare the level at which he desires to play.
Another important trait of any game is the illusion of winnability. If a game is to provide a continuing challenge to the player, it must also provide a continuing motivation to play. It must appear to be winnable to all players, the beginner and the expert. Yet, it must never be truly winnable or it will lose its appeal. This illusion is very difficult to maintain. Some games maintain it for the expert but never achieve it for the beginner; these games intimidate all but the most determined players. TEMPEST, for example, intimidates many players because it appears to be unwinnable. The most successful game in this respect is PAC-MAN, which appears winnable to most players, yet is never quite winnable.
The most important factor in the creation of the illusion of winnability is the cleanliness of the game. A dirty game intimidates its beginners with an excess of details. The beginner never overcomes the inhibiting suspicion that somewhere in the game lurks a "gotcha". By contrast, a clean game encourages all players to experiment with the game as it appears.
Another key factor in maintaining the illusion of winnability arises from a careful analysis of the source of player failure. In every game the player is expected to fail often. What trips up the player? If the player believes that his failure arises from some flaw in the game or its controls, he becomes frustrated and angry with what he rightly judges to be an unfair and unwinnable situation. If the player believes that his failure arises from his own limitations, but judges that the game expects or requires superhuman performance, the player again rejects the game as unfair and unwinnable. But if the player believes failures to be attributable to correctable errors on his own part, he believes the game to be winnable and plays on in an effort to master the game. When the player fails, he should slap himself gently and say, "That was a silly mistake!"
In this chapter I have described a number of design methods and ideals that I have used in developing several games. Methods and ideals should not be used in grab bag fashion, for taken together they constitute the elusive element we call "technique". Technique is part of an artist's signature, as important as theme. When we listen to Beethoven's majestic Fifth Symphony, or the rapturous Sixth, or the ecstatic Ninth, we recognize in all the identifying stamp of Beethoven's masterful technique. If you would be a computer game designer, you must establish and develop your own technique.