  1. #Time-stamp: "2001-02-23 20:09:47 MST" -*-Text-*-
  2. # This document contains text in Perl "POD" format.
  3. # Use a POD viewer like perldoc or perlman to render it.
  4. =head1 NAME
  5. HTML::Tree::AboutTrees -- article on tree-shaped data structures in Perl
  6. =head1 SYNOPSIS
  7. # This an article, not a module.
  8. =head1 DESCRIPTION
  9. The following article by Sean M. Burke first appeared in I<The Perl
  10. Journal> #18 and is copyright 2000 The Perl Journal. It appears
  11. courtesy of Jon Orwant and The Perl Journal. This document may be
  12. distributed under the same terms as Perl itself.
  13. =head1 Trees
  14. -- Sean M. Burke
  15. =over
  16. "AaaAAAaauugh! Watch out for that tree!"
  17. -- I<George of the Jungle theme>
  18. =back
  19. Perl's facility with references, combined with its automatic management of
  20. memory allocation, makes it straightforward to write programs that store data
  21. in structures of arbitrary form and complexity.
  22. But I've noticed that many programmers, especially those who started out
  23. with more restrictive languages, seem at home with complex but uniform
  24. data structures -- N-dimensional arrays, or more struct-like things like
  25. hashes-of-arrays(-of-hashes(-of-hashes), etc.) -- but they're often uneasy
  26. with building more freeform, less tabular structures, like
  27. tree-shaped data structures.
  28. But trees are easy to build and manage in Perl, as I'll demonstrate
  29. by showing off how the HTML::Element class manages elements in an HTML
  30. document tree, and by walking you through a from-scratch implementation
  31. of game trees. But first we need to nail down what we mean by a "tree".
  32. =head2 Socratic Dialogues: "What is a Tree?"
  33. My first brush with tree-shaped structures was in linguistics classes,
  34. where tree diagrams are used to describe the syntax underlying natural
  35. language sentences. After learning my way around I<those> trees, I
  36. started to wonder -- are what I'm used to calling "trees" the same as what
  37. programmers call "trees"? So I asked lots of helpful and patient
  38. programmers how they would define a tree. Many replied with an
  39. answer in jargon that they could not really explain (understandable,
  40. since explaining things, especially defining things, is harder
  41. than people think):
  42. =over
  43. -- So what I<is> a "tree", a tree-shaped data structure?
  44. -- A tree is a special case of an acyclic directed graph!
  45. -- What's a "graph"?
  46. -- Um... lines... and... you draw it... with... arcs! nodes! um...
  47. =back
  48. The most helpful were folks who couldn't explain directly, but with
  49. whom I could get into a rather Socratic dialog (where I<I> asked the
  50. half-dim half-earnest questions), often with much doodling of
  51. illustrations...
  52. Question: so what's a tree?
  53. Answer: A tree is a collection of nodes that are linked together in a,
  54. well, tree-like way! Like this I<[drawing on a napkin]:>
  55.         A
  56.        / \
  57.       B   C
  58.         / | \
  59.         D E F
  60. Q: So what do these letters represent?
  61. A: Each is a different node, a bunch of data. Maybe C is a
  62. bunch of data that stores a number, maybe a hash table, maybe nothing
  63. at all besides the fact that it links to D, E, and F (which are other
  64. nodes).
  65. Q: So what're the lines between the nodes?
  66. A: Links. Also called "arcs". They just symbolize the fact that each
  67. node holds a list of nodes it links to.
  68. Q: So what if I draw nodes and links, like this...
  69.     B -- E
  70.    / \  / \
  71.   A   C
  72.    \ /
  73.     E
  74. Is that still a tree?
  75. A: No, not at all. There's a lot of un-treelike things about that.
  76. First off, E has a link coming off of it going into nowhere. You can't have
  77. a link to nothing -- you can only link to another node. Second off, I
  78. don't know what that sideways link between B and E means...
  79. Q: Okay, let's work our way up from something simpler. Is this a tree...?
  80. A
  81. A: Yes, I suppose. It's a tree of just one node.
  82. Q: And how about...
  83. A
  84. B
  85. A: No, you can't just have nodes floating there, unattached.
  86. Q: Okay, I'll link A and B. How's this?
  87. A
  88. |
  89. B
  90. A: Yup, that's a tree. There's a node A, and a node B, and they're linked.
  91. Q: How is that tree any different from this one...?
  92. B
  93. |
  94. A
  95. A: Well, in both cases A and B are linked. But it's in a different
  96. direction.
  97. Q: Direction? What does the direction mean?
  98. A: Well, it depends what the tree represents. If it represents a
  99. categorization, like this:
  100.        citrus
  101.       /  |  \
  102.  orange lemon kumquat ...
  103. then you mean to say that oranges, lemons, kumquats, etc., are a kind of
  104. citrus. But if you drew it upside down, you'd be saying, falsely, that
  105. citrus is a kind of kumquat, a kind of lemon, and a kind of orange.
  106. If the tree represented cause-and-effect (or at least what situations
  107. could follow others), or represented what's a part of what, you
  108. wouldn't want to get those backwards, either. So with the nodes you
  109. draw together on paper, one has to be over the other, so you can tell which
  110. way the relationship in the tree works.
  111. Q: So are these two trees the same?
  112.     A           A
  113.    / \         / \
  114.   B   C       B   \
  115.                     C
  116. A: Yes, although by convention we often try to line up things in the
  117. same generation, like it is in the diagram on the left.
  118. Q: "generation"? This is a family tree?
  119. A: No, not unless it's a family tree for just yeast cells or something
  120. else that reproduces asexually.
  121. But for sake of having lots of terms to use, we just pretend that links
  122. in the tree represent the "is a child of" relationship, instead of "is a
  123. kind of" or "is a part of", or "could result from", or whatever the real
  124. relationship is. So we get to borrow a lot of kinship words for
  125. describing trees -- B and C are "children" (or "daughters") of A; A is
  126. the "parent" (or "mother") of B and C. Node C is a "sibling" (or
  127. "sister") of node C; and so on, with terms like "descedants" (a node's
  128. children, children's children, etc.), and "generation" (all the
  129. nodes at the same "level" in the tree, i.e., are either all
  130. grandchildren of the top node, or all great-grand-children, etc.), and
  131. "lineage" or "ancestors" (parents, and parent's parents, etc., all the
  132. way to the topmost node).
  133. So then we get to express rules in terms like "B<A node cannot have more
  134. than one parent>", which means that this is not a valid tree:
  135.     A
  136.    / \
  137.   B   C
  138.    \ /
  139.     E
  140. And: "B<A node can't be its own parent>", which excludes this looped-up
  141. connection:
  142. /\
  143. A |
  144. \/
  145. Or, put more generally: "B<A node can't be its own ancestor>", which
  146. excludes the above loop, as well as the one here:
  147.        /\
  148.       Z |
  149.      /  |
  150.     A   |
  151.    / \  |
  152.   B   C |
  153.        \/
  154. That tree is excluded because A is a child of Z, and Z is a child of C,
  155. and C is a child of A, which means A is its own great-grandparent. So
  156. this whole network can't be a tree, because it breaks the sort of
  157. meta-rule: B<once any node in the supposed tree breaks the rules for
  158. trees, you don't have a tree anymore.>
  159. Q: Okay, now, are these two trees the same?
  160.     A            A
  161.   / | \        / | \
  162.  B  C  D      D  C  B
  163. A: It depends whether you're basing your concept of trees on each node
  164. having a set (unordered list) of children, or an (ordered) list of
  165. children. It's a question of whether ordering is important for what
  166. you're doing. With my diagram of citrus types, ordering isn't
  167. important, so these tree diagrams express the same thing:
  168.         citrus
  169.        /  |  \
  170.   orange lemon kumquat
  171.         citrus
  172.        /  |  \
  173.  kumquat orange lemon
  174. because it doesn't make sense to say that oranges are "before" or
  175. "after" kumquats in the whole botanical scheme of things. (Unless, of
  176. course, you I<are> using ordering to mean something, like a degree of
  177. genetic similarity.)
  178. But consider a tree that's a diagram of what steps are comprised in an
  179. activity, to some degree of specificity:
  180.            make tea
  181.           /    |    \
  182.       pour   infuse  serve
  183.    hot water  /   \
  184.   in cup/pot /     \
  185.            add      let
  186.            tea      sit
  187.          leaves
  188. This means that making tea consists of putting hot water in a cup or
  189. pot, infusing it (which itself consists of adding tea leaves and letting
  190. it sit), then serving it -- I<in that order>. If you serve an empty
  191. dry pot (sipping from empty cups, etc.), let it sit, add tea leaves,
  192. and pour in hot water, then what you're doing is performance art, not
  193. tea preparation:
  194.        performance
  195.            art
  196.          /  |  \
  197.      serve infuse pour
  198.           /   \   hot water
  199.          /     \  in cup/pot
  200.        let     add
  201.        sit     tea
  202.              leaves
  203. Except for my having renamed the root, this tree is the same as
  204. the making-tea tree as far as what's under what, but it differs
  205. in order, and what the tree means makes the order important.
  206. Q: Wait -- "root"? What's a root?
  207. A: Besides kinship terms like "mother" and "daughter", the jargon for
  208. tree parts also has terms from real-life tree parts: the part that
  209. everything else grows from is called the root; and nodes that don't
  210. have nodes attached to them (i.e., childless nodes) are called
  211. "leaves".
  212. Q: But you've been drawing all your trees with the root at the top and
  213. leaves at the bottom.
  214. A: Yes, but for some reason, that's the way everyone seems to think of
  215. trees. They can draw trees as above; or they can draw them sort of
  216. sideways with indenting representing what nodes are children of what:
  217. * make tea
  218.     * pour hot water in cup/pot
  219.     * infuse
  220.         * add tea leaves
  221.         * let sit
  222.     * serve
  223. ...but folks almost never seem to draw trees with the root at the
  224. bottom. So imagine it's based on a spider plant in a hanging pot.
  225. Unfortunately, spider plants I<aren't> botanically trees, they're
  226. plants; but "spider plant diagram" is rather a mouthful, so let's just
  227. call them trees.
  228. =head2 Trees Defined Formally
  229. In time, I digested all these assorted facts about programmers' ideas of
  230. trees (which turned out to be just a more general case of linguistic
  231. ideas of trees) into a single rule:
  232. * A node is an item that contains ("is over", "is parent of", etc.)
  233. zero or more other nodes.
  234. From this you can build up formal definitions for useful terms, like so:
  235. * A node's B<descendants> are defined as all its children, and all
  236. their children, and so on. Or, stated recursively: a node's
  237. descendants are all its children, and all its children's descendants.
  238. (And if it has no children, it has no descendants.)
  239. * A node's B<ancestors> consist of its parent, and its parent's
  240. parent, etc, up to the root. Or, recursively: a node's ancestors
  241. consist of its parent and its parent's ancestors. (If it has no parent,
  242. it has no ancestors.)
  243. * A B<tree> is a root node and all the root's descendants.
  244. And you can add a proviso or two to clarify exactly what I impute to the
  245. word "other" in "other nodes":
  246. * A node cannot contain itself, or contain any node that contains it,
  247. etc. Looking at it the other way: a node cannot be its own parent or
  248. ancestor.
  249. * A node can be root (i.e., no other node contains it) or can be
  250. contained by only one parent; no node can be the child of two or more
  251. parents.
  252. Add to this the idea that children are sometimes ordered, and sometimes
  253. not, and that's about all you need to know about defining what a tree
  254. is. From there it's a matter of using them.
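If you wanted to enforce that last proviso in code, a minimal sketch (assuming
nodes are hashrefs that keep a 'parent' entry, undef for the root) is just to
walk up the ancestry and make sure you never arrive back where you started:

  sub is_own_ancestor {   # a hypothetical little checker, for illustration
    my $node = $_[0];
    my %seen;
    for (my $up = $node->{'parent'}; defined $up; $up = $up->{'parent'}) {
      return 1 if $up == $node;   # compared as references: the very same node
      last if $seen{$up}++;       # some *other* loop above us; give up
    }
    return 0;                     # never met itself on the way up
  }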
  255. =head2 Markup Language Trees: HTML-Tree
  256. While not I<all> markup languages are inherently tree-like, the
  257. best-known family of markup languages, HTML, SGML, and XML, are about
  258. as tree-like as you can get. In these languages, a document consists
  259. of elements and character data in a tree structure where
  260. there is one root element, and elements can contain either other
  261. elements, or character data.
  262. =over
  263. Footnote:
  264. For sake of simplicity, I'm glossing over
  265. comments (<!-- ... -->), processing instructions (<?xml
  266. version='1.0'?>), and declarations (<!ELEMENT ...>, <!DOCTYPE ...>).
  267. And I'm not bothering to distinguish entity references
  268. (&lt;, &#64;) or CDATA sections (<![CDATA[ ...]]>) from normal text.
  269. =back
  270. For example, consider this HTML document:
  271. <html lang="en-US">
  272.   <head>
  273.     <title>
  274.       Blank Document!
  275.     </title>
  276.   </head>
  277.   <body bgcolor="#d010ff">
  278.     I've got
  279.     <em>
  280.       something to saaaaay
  281.     </em>
  282.     !
  283.   </body>
  284. </html>
  285. I've indented this to point out what nodes (elements or text items) are
  286. children of what, with each node on a line of its own.
  287. The HTML::TreeBuilder module (in the CPAN distribution HTML-Tree)
  288. does the work of taking HTML source and
  289. building in memory the tree that the document source represents.
  290. =over
  291. Footnote: it requires the HTML::Parser module, which tokenizes the
  292. source -- i.e., identifies each tag, bit of text, comment, etc.
  293. =back
  294. The tree structures that it builds represent bits of text with
  295. normal Perl scalar string values; but elements are represented with
  296. objects -- that is, chunks of data that belong to a
  297. class (in this case, HTML::Element), a class that provides methods
  298. (routines) for accessing the pieces of data in each element, and
  299. otherwise doing things with elements. (See my article in TPJ#17 for a
  300. quick explanation of objects, the POD document C<perltoot> for a longer
  301. explanation, or Damian Conway's excellent book I<Object-Oriented Perl>
  302. for the full story.)
  303. Each HTML::Element object contains a number of pieces of data:
  304. * its element name ("html", "h1", etc., accessed as $element->tag)
  305. * a list of elements (or text segments) that it contains, if any
  306. (accessed as $element->content_list or $element->content, depending on
  307. whether you want a list, or an arrayref)
  308. * what element, if any, contains it (accessed as $element->parent)
  309. * and any SGML attributes that the element has,
  310. such as C<lang="en-US">, C<align="center">, etc. (accessed as
  311. $element->attr('lang'), $element->attr('align'), etc.)
  312. So, for example, when HTML::TreeBuilder builds the tree for the above
  313. HTML document source, the object for the "body" element has these pieces of
  314. data:
  315. * element name: "body"
  316. * nodes it contains:
  317.     the string "I've got "
  318.     the object for the "em" element
  319.     the string "!"
  320. * its parent:
  321.     the object for the "html" element
  322. * bgcolor: "#d010ff"
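As a quick sketch of reading those pieces of data back out (the filename here
is made up just for illustration):

  use HTML::TreeBuilder;

  my $root = HTML::TreeBuilder->new;
  $root->parse_file('blank.html');           # the sample document above

  my ($head, $body) = $root->content_list;   # the root's two children
  print $body->tag, "\n";                    # prints "body"
  print $body->attr('bgcolor'), "\n";        # prints "#d010ff"
  my @in_body = $body->content_list;         # the text bits and the "em" object
  print $body->parent->tag, "\n";            # prints "html"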
  323. Now, once you have this tree of objects, almost anything you'd want to
  324. do with it starts with searching the tree for some bit of information
  325. in some element.
  326. Accessing a piece of information in, say, a hash of hashes of hashes,
  327. is straightforward:
  328. $password{'sean'}{'sburke1'}{'hpux'}
  329. because you know that all data points in that structure are accessible
  330. with that syntax, but with just different keys. Now, the "em" element
  331. in the above HTML tree does happen to be accessible
  332. as the root's child #1's child #1:
  333. $root->content->[1]->content->[1]
  334. But with trees, you typically don't know the exact location (via
  335. indexes) of the data you're looking for. Instead, finding what you want
  336. will typically involve searching through the tree, seeing if every node is
  337. the kind you want. Searching the whole tree is simple enough -- look at
  338. a given node, and if it's not what you want, look at its children, and
  339. so on. HTML-Tree provides several methods that do this for you, such as
  340. C<find_by_tag_name>, which returns the elements (or the first element, if
  341. called in scalar context) under a given node (typically the root) whose
  342. tag name is whatever you specify.
  343. For example, that "em" node can be found as:
  344. my $that_em = $root->find_by_tag_name('em');
  345. or as:
  346. @ems = $root->find_by_tag_name('em');
  347. # will only have one element for this particular tree
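Under the hood, such a search is just the simple recursive walk described
above. A throwaway hand-rolled version (for illustration only; the module's
own methods are more general and more careful) might look like:

  sub find_all_by_tag {   # a hypothetical helper, not part of HTML-Tree
    my ($node, $tag) = @_;
    return () unless ref $node;   # plain text segments aren't elements
    my @found;
    push @found, $node if $node->tag eq $tag;
    foreach my $child ($node->content_list) {
      push @found, find_all_by_tag($child, $tag);
    }
    return @found;
  }

  my @ems = find_all_by_tag($root, 'em');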
  348. Now, given an HTML document of whatever structure and complexity, if you
  349. wanted to do something like change every
  350. =over
  351. E<lt>emE<gt>I<stuff>E<lt>/emE<gt>
  352. =back
  353. to
  354. =over
  355. E<lt>em class="funky"E<gt>
  356. B<E<lt>bE<gt>[-E<lt>/bE<gt>>
  357. I<stuff>
  358. B<E<lt>bE<gt>-]E<lt>/bE<gt>>
  359. E<lt>/emE<gt>
  360. =back
  361. the first step is to frame this operation in terms of what you're doing
  362. to the tree. You're changing this:
  363.    em
  364.     |
  365.    ...
  366. to this:
  367.          em
  368.        /  |  \
  369.       b  ...  b
  370.       |       |
  371.     "[-"    "-]"
  372. In other words, you're finding all elements whose tag name is "em",
  373. setting its class attribute to "funky", and adding one child to the start
  374. of its content list -- a new "b" element
  375. whose content is the text string "[-" -- and one to the end of its
  376. content list -- a new "b" element whose content is the text string "-]".
  377. Once you've got it in these terms, it's just a matter of running to the
  378. HTML::Element documentation, and coding this up with calls to the
  379. appropriate methods, like so:
  380. use HTML::Element 1.53;
  381. use HTML::TreeBuilder 2.96;
  382. # Build the tree by parsing the document
  383. my $root = HTML::TreeBuilder->new;
  384. $root->parse_file('whatever.html');  # source file
  385. # Now make new nodes where needed
  386. foreach my $em ($root->find_by_tag_name('em')) {
  387.   $em->attr('class', 'funky');  # Set that attribute
  388.   # Make the two new B nodes
  389.   my $new1 = HTML::Element->new('b');
  390.   my $new2 = HTML::Element->new('b');
  391.   # Give them content (they have none at first)
  392.   $new1->push_content('[-');
  393.   $new2->push_content('-]');
  394.   # And put 'em in place!
  395.   $em->unshift_content($new1);
  396.   $em->push_content($new2);
  397. }
  398. print
  399.   "<!-- Looky see what I did! -->\n",
  400.   $root->as_HTML(), "\n";
  401. The class HTML::Element provides just about every method I can imagine you
  402. needing, for manipulating trees made of HTML::Element objects. (And
  403. what it doesn't directly provide, it will give you the components to build
  404. it with.)
  405. =head2 Building Your Own Trees
  406. Theoretically, any tree is pretty much like any other tree, so you could
  407. use HTML::Element for anything you'd ever want to do with tree-arranged
  408. objects. However, as its name implies, HTML::Element is basically
  409. I<for> HTML elements; it has lots of features that make sense only for
  410. HTML elements (like the idea that every element must have a tag-name).
  411. And it lacks some features that might be useful for general applications
  412. -- such as any sort of checking to make sure that you're not trying to
  413. arrange objects in a non-treelike way. For a general-purpose tree class
  414. that does have such features, you can use Tree::DAG_Node, also available
  415. from CPAN.
  416. However, if your task is simple enough, you might find it overkill to
  417. bother using Tree::DAG_Node. And, in any case, I find that the best
  418. way to learn how something works is to implement it (or something like
  419. it, but simpler) yourself. So I'll here discuss how you'd implement a tree
  420. structure, I<without> using any of the existing classes for tree nodes.
  421. =head2 Implementation: Game Trees for Alak
  422. Suppose that the task at hand is to write a program that can play
  423. against a human opponent at a strategic board game (as opposed to a
  424. board game where there's an element of chance). For most such games, a
  425. "game tree" is an essential part of the program (as I will argue,
  426. below), and this will be our test case for implementing a tree
  427. structure from scratch.
  428. For sake of simplicity, our game is not chess or backgammon, but instead
  429. a much simpler game called Alak. Alak was invented by the mathematician
  430. A. K. Dewdney, and described in his 1984 book I<Planiverse>. The rules
  431. of Alak are simple:
  432. =over
  433. Footnote: Actually, I'm describing only my
  434. interpretation of the rules Dewdney describes in I<Planiverse>. Many
  435. other interpretations are possible.
  436. =back
  437. * Alak is a two-player game played on a one-dimensional board with
  438. eleven slots on it. Each slot can hold at most one piece at a time.
  439. There's two kinds of pieces, which I represent here as "x" and "o" --
  440. x's belong to one player (called X), o's to the other (called O).
  441. * The initial configuration of the board is:
  442. xxxx___oooo
  443. For sake of the article, the slots are numbered from 1 (on the left) to
  444. 11 (on the right), and X always has the first move.
  445. * The players take turns moving. At each turn, each player can move
  446. only one piece, once. (This is unlike checkers, where you move one piece
  447. per move but get to keep moving it if you jump your opponent's
  448. piece.) A player cannot pass up on his turn. A player can move any one
  449. of his pieces to the next unoccupied slot to its right or left, which
  450. may involve jumping over occupied slots. A player cannot move a piece
  451. off the side of the board.
  452. * If a move creates a pattern where the opponent's pieces are
  453. surrounded, on both sides, by two pieces of the mover's color (with no
  454. intervening unoccupied blank slot), then those surrounded pieces are
  455. removed from the board.
  456. * The goal of the game is to remove all of your opponent's pieces, at
  457. which point the game ends. Removing all-but-one ends the game as
  458. well, since the opponent can't surround you with one piece, and so will
  459. always lose within a few moves anyway.
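To make those rules concrete, here is a small sketch (just for illustration,
not the actual C<Games::Alak> code) of generating the legal moves for one
side, with the board kept as a string like "xxxx___oooo" and the slots
numbered 1 to 11 as above:

  sub legal_moves {
    my ($board, $mover) = @_;   # $mover is 'x' or 'o'
    my @slots = split //, $board;
    my @moves;                  # each move is [from_slot, to_slot]
    for my $i (0 .. $#slots) {
      next unless $slots[$i] eq $mover;
      for my $dir (-1, 1) {     # look left, then right
        my $j = $i + $dir;
        $j += $dir while $j >= 0 and $j <= $#slots and $slots[$j] ne '_';
        push @moves, [ $i + 1, $j + 1 ]
          if $j >= 0 and $j <= $#slots;   # can't fall off the board
      }
    }
    return @moves;
  }

  # For the starting board, legal_moves('xxxx___oooo', 'x') returns
  # [1,5], [2,5], [3,5], [4,5] -- any x can move, but only to slot 5.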
  460. Consider, then, this rather short game where X starts:
  461.   xxxx___oooo
  462.     ^            Move 1: X moves from 3 (shown with caret) to 5
  463.                  (Note that any of X's pieces could move, but
  464.                  that the only place they could move to is 5.)
  465.   xx_xx__oooo
  466.           ^      Move 2: O moves from 9 to 7.
  467.   xx_xx_oo_oo
  468.      ^           Move 3: X moves from 4 to 6.
  469.   xx__xxoo_oo
  470.            ^     Move 4: O (stupidly) moves from 10 to 9.
  471.   xx__xxooo_o
  472.       ^          Move 5: X moves from 5 to 10, making the board
  473.                  "xx___xoooxo". The three o's that X just
  474.                  surrounded are removed.
  475.   xx___x___xo
  476.                  O has only one piece, so has lost.
  477. Now, move 4 could have gone quite the other way:
  478.   xx__xxoo_oo
  479.          ^       Move 4: O moves from 8 to 4, making the board
  480.                  "xx_oxxo__oo". The surrounded x's are removed.
  481.   xx_o__o__oo
  482.   ^              Move 5: X moves from 1 to 3.
  483.   _xxo__o__oo
  484.         ^        Move 6: O moves from 7 to 6.
  485.   _xxo_o___oo
  486.    ^             Move 7: X moves from 2 to 5, removing the o at 4.
  487.   __x_xo___oo
  488. ...and so on.
  489. To teach a computer program to play Alak (as player X, say), it needs to
  490. be able to look at the configuration of the board, figure out what moves
  491. it can make, and weigh the benefit or costs, immediate or eventual, of
  492. those moves.
  493. So consider the board from just before move 3, and figure all the possible
  494. moves X could make. X has pieces in slots 1, 2, 4, and 5. The leftmost
  495. two x's (at 1 and 2) are up against the end of the board, so they
  496. can move only right. The other two x's (at 4 and 5) can move either
  497. right or left:
  498. Starting board: xx_xx_oo_oo
  499. moving 1 to 3 gives _xxxx_oo_oo
  500. moving 2 to 3 gives x_xxx_oo_oo
  501. moving 4 to 3 gives xxx_x_oo_oo
  502. moving 5 to 3 gives xxxx__oo_oo
  503. moving 4 to 6 gives xx__xxoo_oo
  504. moving 5 to 6 gives xx_x_xoo_oo
  505. For the computer to decide which of these is the best move to make, it
  506. needs to quantify the benefit of these moves as a number -- call that
  507. the "payoff". The payoff of a move can be figured as just the number
  508. of o pieces removed by the most recent move, minus the number of x
  509. pieces removed by the most recent move. (It so happens that the rules
  510. of the game mean that no move can delete both o's and x's, but the
  511. formula still applies.) Since none of these moves removed any pieces,
  512. all these moves have the same immediate payoff: 0.
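Figuring that number is simple; as a sketch (again just for illustration,
not the actual C<Games::Alak> code), you can compare piece counts before and
after the move, since moving a piece never changes the counts -- only the
removals afterward do:

  sub payoff {   # hypothetical helper; boards are strings as above
    my ($before, $after) = @_;
    my $o_removed = ($before =~ tr/o//) - ($after =~ tr/o//);
    my $x_removed = ($before =~ tr/x//) - ($after =~ tr/x//);
    return $o_removed - $x_removed;   # reckoned from X's point of view
  }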
  513. Now, we could race ahead and write an Alak-playing program that could
  514. use the immediate payoff to decide which is the best move to make.
  515. And when there's more than one best move (as here, where all the moves
  516. are equally good), it could choose randomly between the good
  517. alternatives. This strategy is simple to implement; but it makes for a
  518. very dumb program. Consider what O's response to each of the potential
  519. moves (above) could be. Nothing immediately suggests itself for the
  520. first four possibilities (X having moved something to position 3), but
  521. either of the last two (illustrated below) is pretty perilous,
  522. because in either case O has the obvious option (which he would be
  523. foolish to pass up) of removing x's from the board:
  524.   xx_xx_oo_oo
  525.      ^           X moves 4 to 6.
  526.   xx__xxoo_oo
  527.          ^       O moves 8 to 4, giving "xx_oxxo__oo". The two
  528.                  surrounded x's are removed.
  529.   xx_o__o__oo
  530. or
  531.   xx_xx_oo_oo
  532.       ^          X moves 5 to 6.
  533.   xx_x_xoo_oo
  534.          ^       O moves 8 to 5, giving "xx_xoxo__oo". The one
  535.                  surrounded x is removed.
  536.   xx_xo_o__oo
  537. Both contingencies are quite bad for X -- but this is not captured
  538. by the fact that they start out with X thinking his move will be
  539. harmless, having a payoff of zero.
  540. So what's needed is for X to think I<more> than one step ahead -- to
  541. consider not merely what it can do in this move, and what the payoff
  542. is, but to consider what O might do in response, and the
  543. payoff of those potential moves, and so on, considering what X's possible responses
  544. to those cases could be. All these possibilities form a game tree -- a
  545. tree where each node is a board, and its children are successors of
  546. that node -- i.e., the boards that could result from every move
  547. possible, given the parent's board.
  548. But how to represent the tree, and how to represent the nodes?
  549. Well, consider that a node holds several pieces of data:
  550. 1) the configuration of the board, which, being nice and simple and
  551. one-dimensional, can be stored as just a string, like "xx_xx_oo_oo".
  552. 2) whose turn it is, X or O. (Or: who moved last, from which we can
  553. figure whose turn it is).
  554. 3) the successors (child nodes).
  555. 4) the immediate payoff of having moved to this board position from its
  556. predecessor (parent node).
  557. 5) and what move gets us from our predecessor node to here. (Granted,
  558. knowing the board configuration before and after the move, it's easy to
  559. figure out the move; but it's easier still to store it as one is
  560. figuring out a node's successors.)
  561. 6) whatever else we might want to add later.
  562. These could be stored equally well in an array or in a hash, but it's my
  563. experience that hashes are best for cases where you have more than just
  564. two or three bits of data, or especially when you might need to add new
  565. bits of data. Moreover, hash key names are mnemonic --
  566. $node->{'last_move_payoff'} is plain as day, whereas it's not so easy having to
  567. remember with an array that $node->[3] is where you decided to keep the
  568. payoff.
  569. =over
  570. Footnote:
  571. Of course, there are ways around that problem: just swear you'll never
  572. use a real numeric index to access data in the array, and instead use
  573. constants with mnemonic names:
  574. use strict;
  575. use constant idx_PAYOFF => 3;
  576. ...
  577. $n->[idx_PAYOFF]
  578. Or use a pseudohash. But I prefer to keep it simple, and use a hash.
  579. These are, incidentally, the same arguments that
  580. people weigh when trying to decide whether their object-oriented
  581. modules should be based on blessed hashes, blessed arrays, or what.
  582. Essentially the only difference here is that we're not blessing our
  583. nodes or talking in terms of classes and methods.
  584. [end footnote]
  585. =back
  586. So, we might as well represent nodes like so:
  587. $node = {   # hashref
  588.   'board' => ...board string, e.g., "xx_x_xoo_oo"
  589.   'last_move_payoff' => ...payoff of the move
  590.                            that got us here.
  591.   'last_move_from' => ...the start...
  592.   'last_move_to'   => ...and end point of the move
  593.                          that got us here. E.g., 5 and 6,
  594.                          representing a move from 5 to 6.
  595.   'whose_turn' => ...whose move it then becomes.
  596.                      just an 'x' or 'o'.
  597.   'successors' => ...the successors
  598. };
  599. Note that we could have a field called something like 'last_move_who' to
  600. denote who last moved, but since turns in Alak always alternate (and
  601. no-one can pass), storing whose move it is now I<and> who last moved is
  602. redundant -- if X last moved, it's O's turn now, and vice versa.
  603. I chose to have a 'whose_turn' field instead of a 'last_move_who', but
  604. it doesn't really matter. Either way, we'll end up inferring one from
  605. the other at several points in the program.
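That inference is a one-liner wherever we need it; for example:

  # Whoever's turn it is now, the other player is the one who just moved:
  my $who_just_moved = ($node->{'whose_turn'} eq 'x') ? 'o' : 'x';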
  606. When we want to store the successors of a node, should we use an array
  607. or a hash? On the one hand, the successors to $node aren't essentially
  608. ordered, so there's no reason to use an array per se; on the other hand,
  609. if we used a hash, with successor nodes as values, we don't have
  610. anything particularly meaningful to use as keys. (And we can't use the
  611. successors themselves as keys, since the nodes are referred to by
  612. hash references, and you can't use a reference as a hash key.) Given no
  613. particularly compelling reason to do otherwise, I choose to just use an
  614. array to store all a node's successors, although the order is never
  615. actually used for anything:
  616. $node = {
  617. ...
  618. 'successors' => [ ...nodes... ],
  619. ...
  620. };
  621. In any case, now that we've settled on what should be in a node,
  622. let's make a little sample tree out of a few nodes and see what we can
  623. do with it:
  624. # Board just before move 3 in above game
  625. my $n0 = {
  626. 'board' => 'xx_xx_oo_oo',
  627. 'last_move_payoff' => 0,
  628. 'last_move_from' => 9,
  629. 'last_move_to' => 7,
  630. 'whose_turn' => 'x',
  631. 'successors' => [],
  632. };
  633. # And, for now, just two of the successors:
  634. # X moves 4 to 6, giving xx__xxoo_oo
  635. my $n1 = {
  636. 'board' => 'xx__xxoo_oo',
  637. 'last_move_payoff' => 0,
  638. 'last_move_from' => 4,
  639. 'last_move_to' => 6,
  640. 'whose_turn' => 'o',
  641. 'successors' => [],
  642. };
  643. # or X moves 5 to 6, giving xx_x_xoo_oo
  644. my $n2 = {
  645. 'board' => 'xx_x_xoo_oo',
  646. 'last_move_payoff' => 0,
  647. 'last_move_from' => 5,
  648. 'last_move_to' => 6,
  649. 'whose_turn' => 'o',
  650. 'successors' => [],
  651. };
  652. # Now connect them...
  653. push @{$n0->{'successors'}}, $n1, $n2;
  654. =head2 Digression: Links to Parents
  655. In comparing what we store in an Alak game tree node to what
  656. HTML::Element stores in HTML element nodes, you'll note one big
  657. difference: every HTML::Element node contains a link to its parent,
  658. whereas we don't have our Alak nodes keeping a link to theirs.
  659. The reason this can be an important difference is because it can affect
  660. how Perl knows when you're not using pieces of memory anymore.
  661. Consider the tree we just built, above:
  662.        node 0
  663.       /      \
  664.   node 1    node 2
  665. There's two ways Perl knows you're using a piece of memory:
  666. 1) it's memory that belongs directly to a variable (i.e., is necessary
  667. to hold that variable's value, or valueI<s> in the case of a hash or
  668. array), or 2) it's a piece of memory that something holds a reference
  669. to. In the above code, Perl knows that the hash for node 0 (for board
  670. "xx_xx_oo_oo") is in use because something (namely, the variable
  671. C<$n0>) holds a reference to it. Now, even if you followed the above
  672. code with this:
  673. $n1 = $n2 = 'whatever';
  674. to make your variables C<$n1> and C<$n2> stop holding references to
  675. the hashes for the two successors of node 0, Perl would still know that
  676. those hashes are still in use, because node 0's successors array holds
  677. a reference to those hashes. And Perl knows that node 0 is still in
  678. use because something still holds a reference to it. Now, if you
  679. added:
  680. my $root = $n0;
  681. This would change nothing -- there'd just be I<two> things holding a
  682. reference to the node 0 hash, which in turn holds a reference to the
  683. node 1 and node 2 hashes. And if you then added:
  684. $n0 = 'stuff';
  685. still nothing would change, because something (C<$root>) still holds a
  686. reference to the node 0 hash. But once I<nothing> holds a reference to
  687. the node 0 hash, Perl will know it can destroy that hash (and reclaim
  688. the memory for later use, say), and once it does that, nothing will hold
  689. a reference to the node 1 or the node 2 hashes, and those will be
  690. destroyed too.
  691. But consider if the node 1 and node 2 hashes each had an attribute
  692. "parent" (or "predecessor") that held a reference to node 0. If your
  693. program stopped holding a reference to the node 0 hash, Perl could
  694. I<not> then say that I<nothing> holds a reference to node 0 -- because
  695. node 1 and node 2 still do. So, the memory for nodes 0, 1, and 2 would
  696. never get reclaimed (until your program ended, at which point Perl
  697. destroys I<everything>). If your program grew and discarded lots of
  698. nodes in the game tree, but didn't let Perl know it could reclaim their
  699. memory, your program could grow to use immense amounts of memory --
  700. never a nice thing to have happen. There's three ways around this:
  701. 1) When you're finished with a node, delete the reference each of its
  702. children have to it (in this case, deleting $n1->{'parent'}, say).
  703. When you're finished with a whole tree, just go through the whole tree
  704. erasing the links that children have to their parents.
  705. 2) Reconsider whether you really need to have each node hold a reference
  706. to its parent. Just not having those links will avoid the whole
  707. problem.
  708. 3) use the WeakRef module with Perl 5.6 or later. This allows you to
  709. "weaken" some references (like the references that node 1 and 2 could
  710. hold to their parent) so that they don't count when Perl goes asking
  711. whether anything holds a reference to a given piece of memory. This
  712. wonderful new module eliminates the headaches that can often crop up
  713. with either of the two previous methods.
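As a minimal sketch of that third approach, using C<Scalar::Util>'s C<weaken>
(which provides the same facility as the WeakRef module; the node contents
here are just stand-ins):

  use Scalar::Util qw(weaken);

  my $parent = { 'board' => 'xx_xx_oo_oo', 'successors' => [] };
  my $child  = { 'board' => 'xx__xxoo_oo', 'successors' => [] };

  push @{ $parent->{'successors'} }, $child;
  $child->{'parent'} = $parent;
  weaken( $child->{'parent'} );
  # Now the 'parent' entry no longer keeps the parent node alive: once
  # nothing else refers to that hash, Perl can reclaim it, and
  # $child->{'parent'} quietly becomes undef.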
  714. It so happens that our Alak program is simple enough that we don't need
  715. for our nodes to have links to their parents, so the second solution is
  716. fine. But in a more advanced program, the first or third solutions
  717. might be unavoidable.
  718. =head2 Recursively Printing the Tree
  719. I don't like working blind -- if I have any kind of a complex data
  720. structure in memory for a program I'm working on, the first thing I do
  721. is write something that can dump that structure to the screen so I can
  722. make sure that what I I<think> is in memory really I<is> what's in
  723. memory. Now, I could just use the "x" pretty-printer command in Perl's
  724. interactive debugger, or I could have the program use the
  725. C<Data::Dumper> module. But in this case, I think the output from those
  726. is rather too verbose. Once we have trees with dozens of nodes in them,
  727. we'll really want a dump of the tree to be as concise as possible,
  728. hopefully just one line per node. What I'd like is something that can
  729. print C<$n0> and its successors (see above) as something like:
  730. xx_xx_oo_oo (O moved 9 to 7, 0 payoff)
  731. xx__xxoo_oo (X moved 4 to 6, 0 payoff)
  732. xx_x_xoo_oo (X moved 5 to 6, 0 payoff)
  733. A subroutine to print a line for a given node, and then do that again for
  734. each successor, would look something like:
  735. sub dump_tree {
  736.   my $n = $_[0];   # "n" is for node
  737.   print
  738.     ...something expressing $n's content...;
  739.   foreach my $s (@{$n->{'successors'}}) {
  740.     # "s" is for successor
  741.     dump_tree($s);
  742.   }
  743. }
  744. And we could just start that out with a call to C<dump_tree($n0)>.
  745. Since this routine...
  746. =over
  747. Footnote:
  748. I first wrote this routine starting out with "sub dump {". But when
  749. I tried actually calling C<dump($n0)>, Perl would dump core! Imagine
  750. my shock when I discovered that this is absolutely to be expected --
  751. Perl provides a built-in function called C<dump>, the purpose of which
  752. is to, yes, make Perl dump core. Calling our routine "dump_tree"
  753. instead of "dump" neatly avoids that problem.
  754. =back
  755. ...does its work (dumping the subtree at and under the
  756. given node) by calling itself, it's B<recursive>. However, there's a
  757. special term for this kind of recursion across a tree: traversal. To
  758. B<traverse> a tree means to do something to a node, and to traverse its
  759. children. There's two prototypical ways to do this, depending on what
  760. happens when:
  761. traversing X in pre-order:
  762.   * do something to X
  763.   * then traverse X's children
  764. traversing X in post-order:
  765.   * traverse X's children
  766.   * then do something to X
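In terms of our hashref nodes (each with its 'successors' arrayref), the two
orders look like this skeletal sketch, where C<$do> is a coderef for whatever
you want done to each node:

  sub traverse_preorder {
    my ($n, $do) = @_;
    $do->($n);                       # do something to the node first...
    traverse_preorder($_, $do) foreach @{ $n->{'successors'} };
  }

  sub traverse_postorder {
    my ($n, $do) = @_;
    traverse_postorder($_, $do) foreach @{ $n->{'successors'} };
    $do->($n);                       # ...or only after all its children
  }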
  767. Dumping the tree to the screen the way we want it happens to be a matter
  768. of pre-order traversal, since the thing we do (print a description of
  769. the node) happens before we recurse into the successors.
  770. When we try writing the C<print> statement for our above C<dump_tree>,
  771. we can get something like:
  772. sub dump_tree {
  773.   my $n = $_[0];
  774.   # "xx_xx_oo_oo (O moved 9 to 7, 0 payoff)"
  775.   print
  776.     $n->{'board'}, " (",
  777.     ($n->{'whose_turn'} eq 'o' ? 'X' : 'O'),
  778.       # Infer who last moved from whose turn it is now.
  779.     " moved ", $n->{'last_move_from'},
  780.     " to ", $n->{'last_move_to'},
  781.     ", ", $n->{'last_move_payoff'},
  782.     " payoff)\n",
  783.   ;
  784.   foreach my $s (@{$n->{'successors'}}) {
  785.     dump_tree($s);
  786.   }
  787. }
  788. If we run this on $n0 from above, we get this:
  789. xx_xx_oo_oo (O moved 9 to 7, 0 payoff)
  790. xx__xxoo_oo (X moved 4 to 6, 0 payoff)
  791. xx_x_xoo_oo (X moved 5 to 6, 0 payoff)
  792. Each line on its own is fine, but we forget to allow for indenting, and
  793. without that we can't tell what's a child of what. (Imagine if the
  794. first successor had successors of its own -- you wouldn't be able to
  795. tell if it were a child, or a sibling.) To get indenting, we'll need
  796. to have the instances of the C<dump_tree> routine know how far down in
  797. the tree they're being called, by passing a depth parameter between
  798. them:
  799. sub dump_tree {
  800.   my $n = $_[0];
  801.   my $depth = $_[1];
  802.   $depth = 0 unless defined $depth;
  803.   print
  804.     " " x $depth,
  805.     ...stuff...
  806.   foreach my $s (@{$n->{'successors'}}) {
  807.     dump_tree($s, $depth + 1);
  808.   }
  809. }
  810. When we call C<dump_tree($n0)>, C<$depth> (from C<$_[1]>) is undefined, so
  811. gets set to 0, which translates into an indenting of no spaces. But when
  812. C<dump_tree> invokes itself on C<$n0>'s children, those instances see
  813. C<$depth> + 1 as their C<$_[1]>, giving appropriate indenting.
  814. =over
  815. Footnote:
  816. Passing values around between different invocations of a recursive
  817. routine, as shown, is a decent way to share the data. Another way
  818. to share the data is by keeping it in a global variable, like C<$Depth>,
  819. initially set to 0. Each time C<dump_tree> is about to recurse, it must
  820. C<++$Depth>, and when it's back, it must C<--$Depth>.
  821. Or, if the reader is familiar with closures, consider this approach:
  822. sub dump_tree {
  823.   # A wrapper around calls to a recursive closure:
  824.   my $start_node = $_[0];
  825.   my $depth = 0;
  826.     # to be shared across calls to $recursor.
  827.   my $recursor;
  828.   $recursor = sub {
  829.     my $n = $_[0];
  830.     print " " x $depth,
  831.       ...stuff...
  832.     ++$depth;
  833.     foreach my $s (@{$n->{'successors'}}) {
  834.       $recursor->($s);
  835.     }
  836.     --$depth;
  837.   };
  838.   $recursor->($start_node);  # start recursing
  839.   undef $recursor;
  840. }
  841. The reader with an advanced understanding of Perl's reference-count-based
  842. garbage collection is invited to consider why it is currently necessary
  843. to undef $recursor (or otherwise change its value) after all recursion
  844. is done.
  845. The reader whose mind is perverse in other ways is invited to consider
  846. how (or when!) passing a depth parameter around is unnecessary because
  847. of information that Perl's C<caller(N)> function reports!
  848. [end footnote]
  849. =back
  850. =head2 Growing the Tree
  851. Our C<dump_tree> routine works fine for the sample tree we've got, so
  852. now we should get the program working on making its own trees, starting
  853. from a given board.
  854. In C<Games::Alak> (the CPAN-released version of Alak that uses
  855. essentially the same code that we're currently discussing the
  856. tree-related parts of), there is a routine called C<figure_successors>
  857. that, given one childless node, will figure out all its possible
  858. successors. That is, it looks at the current board, looks at every piece
  859. belonging to the player whose turn it is, and considers the effect of
  860. moving each piece every possible way -- notably, it figures out the
  861. immediate payoff, and if that move would end the game, it notes that by
  862. setting an "endgame" entry in that node's hash. (That way, we know that
  863. that's a node that I<can't> have successors.)
  864. In the code for C<Games::Alak>, C<figure_successors> does all these things,
  865. in a rather straightforward way. I won't walk you through the details
  866. of the C<figure_successors> code I've written, since the code has
  867. nothing much to do with trees, and is all just implementation of the Alak
  868. rules for what can move where, with what result. Especially interested
  869. readers can puzzle over that part of code in the source listing in the
  870. archive from CPAN, but others can just assume that it works as described
  871. above.
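For the curious, here is a compressed sketch of the I<sort> of thing such a
routine does -- emphatically not the real C<figure_successors> code -- leaning
on the hypothetical C<legal_moves> and C<payoff> sketches from earlier:

  sub figure_successors {
    my $n = $_[0];
    my $mover = $n->{'whose_turn'};
    my $other = ($mover eq 'x') ? 'o' : 'x';
    foreach my $move (legal_moves($n->{'board'}, $mover)) {
      my ($from, $to) = @$move;
      my @slots = split //, $n->{'board'};
      @slots[ $from - 1, $to - 1 ] = ('_', $mover);   # make the move
      my $board = join '', @slots;
      # Remove any run of the opponent's pieces now flanked by the mover's:
      $board =~ s/(?<=$mover)($other+)(?=$mover)/'_' x length($1)/ge;
      my $new = {
        'board'            => $board,
        'last_move_from'   => $from,
        'last_move_to'     => $to,
        'last_move_payoff' => payoff($n->{'board'}, $board),
        'whose_turn'       => $other,
        'successors'       => [],
      };
      $new->{'endgame'} = 1
        if ($board =~ tr/x//) < 2 or ($board =~ tr/o//) < 2;
      push @{ $n->{'successors'} }, $new;
    }
  }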
  872. But consider that C<figure_successors>, regardless of its inner
  873. workings, does not grow the I<tree>; it only makes one set of successors
  874. for one node at a time. It has to be up to a different routine to call
  875. C<figure_successors>, and to keep applying it as needed, in order to
  876. make a nice big tree that our game-playing program can base its
  877. decisions on.
  878. Now, we could do this by just starting from one node, applying
  879. C<figure_successors> to it, then applying C<figure_successors> on all
  880. the resulting children, and so on:
  881. sub grow {   # Just a first attempt at this!
  882.   my $n = $_[0];
  883.   figure_successors($n)
  884.    unless
  885.      @{$n->{'successors'}}
  886.        # already has successors.
  887.      or $n->{'endgame'}
  888.        # can't have successors.
  889.   ;
  890.   foreach my $s (@{$n->{'successors'}}) {
  891.     grow($s);  # recurse
  892.   }
  893. }
  894. If you have a game tree for tic-tac-toe, and you grow it without
  895. limitation (as above), you will soon enough have a fully "solved" tree,
  896. where every node that I<can> have successors I<does>, and all the leaves
  897. of the tree are I<all> the possible endgames (where, in each case, the
  898. board is filled). But a game of Alak is different from tic-tac-toe,
  899. because it can, in theory, go on forever. For example, the following
  900. sequence of moves is quite possible:
  901. xxxx___oooo
  902. xxx_x__oooo
  903. xxx_x_o_ooo
  904. xxxx__o_ooo (x moved back)
  905. xxxx___oooo (o moved back)
  906. ...repeat forever...
  907. So if you tried using our above attempt at a C<grow> routine, Perl would
  908. happily start trying to construct an infinitely deep tree, containing
  909. an infinite number of nodes, consuming an infinite amount of memory, and
  910. requiring an infinite amount of time. As the old saying goes: "You
  911. can't have everything -- where would you put it?" So we have to place
  912. limits on how much we'll grow the tree.
  913. There's more than one way to do this:
  914. 1. We could grow the tree until we hit some limit on the number of
  915. nodes we'll allow in the tree.
  916. 2. We could grow the tree until we hit some limit on the amount of time
  917. we're willing to spend.
  918. 3. Or we could grow the tree until it is fully fleshed out to a certain
  919. depth.
  920. Since we already know how to track depth (as we did in writing C<dump_tree>),
  921. we'll do it that way, the third way. The implementation for that third
  922. approach is also pretty straightforward:
  923. $Max_depth = 3;
  924. sub grow {
  925.   my $n = $_[0];
  926.   my $depth = $_[1] || 0;
  927.   figure_successors($n)
  928.    unless
  929.      $depth >= $Max_depth
  930.      or @{$n->{'successors'}}
  931.      or $n->{'endgame'}
  932.   ;
  933.   foreach my $s (@{$n->{'successors'}}) {
  934.     grow($s, $depth + 1);
  935.   }
  936.   # If we're at $Max_depth, then figure_successors
  937.   # didn't get called, so there's no successors
  938.   # to recurse under -- that's what stops recursion.
  939. }
  940. If we start from a single node (whether it's a node for the starting board
  941. "xxxx___oooo", or for whatever board the computer is faced with), set
  942. C<$Max_depth> to 4, and apply C<grow> to it, it will grow the tree to
  943. include several hundred nodes.
  944. =over
  945. Footnote:
  946. If at each move there are four pieces that can move, and they can each
  947. move right or left, the "branching factor" of the tree is eight, giving
  948. a tree with 1 (depth 0) + 8 (depth 1) + 8 ** 2 + 8 ** 3 + 8 ** 4 =
  949. 4681 nodes in it. But, in practice, not all pieces can move in both
  950. directions (none of the x pieces in "xxxx___oooo" can move left, for
  951. example), and there may be fewer than four pieces, if some were lost.
  952. For example, there are 801 nodes in a tree of depth four starting
  953. from "xxxx___oooo", suggesting an average branching factor of about
  954. five (801 ** (1/4) is about 5.3), not eight.
  955. =back
  956. What we need to derive from that tree is the information about what
  957. are the best moves for X. The simplest way to consider the payoff of
  958. different successors is to just average them -- but what we average
  959. isn't always their immediate payoffs (because that'd leave us using
  960. only one generation of information), but the average payoff of I<their>
  961. successors, if any. We can formalize this as:
  962. To figure a node's average payoff:
  963.   If the node has successors:
  964.     Figure each successor's average payoff.
  965.     My average payoff is the average of theirs.
  966.   Otherwise:
  967.     My average payoff is my immediate payoff.
  968. Since this involves recursing into the successors I<before> doing
  969. anything with the current node, this will traverse the tree
  970. I<in post-order>.
  971. We could work that up as a routine of its own, and apply that to the
  972. tree after we've applied C<grow> to it. But since we'd never
  973. grow the tree without also figuring the average benefit, we might as well
  974. make that figuring part of the C<grow> routine itself:
  975. $Max_depth = 3;
  976. sub grow {
  977.   my $n = $_[0];
  978.   my $depth = $_[1] || 0;
  979.   figure_successors($n)
  980.    unless
  981.      $depth >= $Max_depth
  982.      or @{$n->{'successors'}}
  983.      or $n->{'endgame'}
  984.   ;
  985.   if(@{$n->{'successors'}}) {
  986.     my $a_payoff_sum = 0;
  987.     foreach my $s (@{$n->{'successors'}}) {
  988.       grow($s, $depth + 1);  # RECURSE
  989.       $a_payoff_sum += $s->{'average_payoff'};
  990.     }
  991.     $n->{'average_payoff'}
  992.       = $a_payoff_sum / @{$n->{'successors'}};
  993.   } else {
  994.     $n->{'average_payoff'}
  995.       = $n->{'last_move_payoff'};
  996.   }
  997. }
  998. So, by the time C<grow> has been applied to a node (wherever in the tree it is),
  999. it will have figured successors if possible (which, in turn, sets
  1000. C<last_move_payoff> for each node it creates), and will have set
  1001. C<average_payoff>.
  1002. Beyond this, all that's needed is to start the board out with a root
  1003. node of "xxxx___oooo", and have the computer (X) take turns with the
  1004. user (O) until someone wins. Whenever it's O's turn, C<Games::Alak>
  1005. presents a prompt to the user, letting him know the state of the current
  1006. board, and asking what move he selects. When it's X's turn, the
  1007. computer grows the game tree as necessary (using just the C<grow>
  1008. routine from above), then selects the move with the highest average
  1009. payoff (or one of the highest, in case of a tie).
  1010. In either case, "selecting" a move means just setting that move's node
  1011. as the new root of the program's game tree. Its sibling nodes and their
  1012. descendants (the boards that I<didn't> get selected) and its parent node
  1013. will be erased from memory, since they will no longer be in use (as Perl
  1014. can tell by the fact that nothing holds references to them anymore).
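As one last sketch (a made-up helper, not part of the C<Games::Alak>
interface), that selection step might look like:

  sub choose_move {
    my $root = $_[0];
    my ($best) = sort { $b->{'average_payoff'} <=> $a->{'average_payoff'} }
                      @{ $root->{'successors'} };
    return $best;
  }

  # "Selecting" the move is just re-rooting the tree on that node; the old
  # root and the unchosen siblings then have nothing referring to them, so
  # Perl can reclaim them:
  $game_root = choose_move($game_root);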
  1015. The interface code in C<Games::Alak> (the code that prompts the user for
  1016. his move) actually supports quite a few options besides just moving --
  1017. including dumping the game tree to a specified depth (using a slightly
  1018. fancier version of C<dump_tree>, above), resetting the game, changing
  1019. C<$Max_depth> in the middle of the game, and quitting the game. Like
  1020. C<figure_successors>, it's a bit too long to print here, but interested
  1021. users are welcome to peruse (and freely modify) the code, as well as to
  1022. enjoy just playing the game.
  1023. Now, in practice, there's more to game trees than this: for games with a
  1024. larger branching factor than Alak has (which is most!), game trees of
  1025. depth four or larger would contain too many nodes to be manageable, most
  1026. of those nodes being strategically quite uninteresting for either
  1027. player; dealing with game trees specifically is therefore a matter of
  1028. recognizing uninteresting contingencies and not bothering to grow the
  1029. tree under them.
  1030. =over
  1031. Footnote:
  1032. For example, to choose a straightforward case: if O has a choice between
  1033. moves that put him in immediate danger of X winning and moves that
  1034. don't, then O won't ever choose the dangerous moves (and if he does, the
  1035. computer will know enough to end the game), so there's no point in
  1036. growing the tree any further beneath those nodes.
  1037. =back
  1038. But this sample implementation should illustrate the basics of
  1039. how to build and manipulate a simple tree structure in memory.
  1040. And once you've understood the basics of tree storage here, you should
  1041. be ready to better understand the complexities and peculiarities of
  1042. other systems for creating, accessing, and changing trees, including
  1043. Tree::DAG_Node, HTML::Element, XML::DOM, or related formalisms
  1044. like XPath and XSL.
  1045. B<[end body of article]>
  1046. =head2 [Author Credit]
  1047. Sean M. Burke (C<[email protected]>) is a tree-dwelling hominid.
  1048. =head2 References
  1049. Dewdney, A[lexander] K[eewatin]. 1984. I<Planiverse: Computer Contact
  1050. with a Two-Dimensional World.> Poseidon Press, New York.
  1051. Knuth, Donald Ervin. 1997. I<Art of Computer Programming, Volume 1,
  1052. Third Edition: Fundamental Algorithms>. Addison-Wesley, Reading, MA.
  1053. Wirth, Niklaus. 1976. I<Algorithms + Data Structures = Programs>
  1054. Prentice-Hall, Englewood Cliffs, NJ.
  1055. Worth, Stan and Allman Sheldon. Circa 1967. I<George of the Jungle>
  1056. theme. [music by Jay Ward.]
  1057. Wirth's classic, currently and lamentably out of print, has a good
  1058. section on trees. I find it clearer than Knuth's (if not quite as
  1059. encyclopedic), probably because Wirth's example code is in a
  1060. block-structured high-level language (basically Pascal), instead
  1061. of in assembler (MIX). I believe the book was re-issued in the
  1062. 1980s under the titles I<Algorithms and Data Structures> and, in a
  1063. German edition, I<Algorithmen und Datenstrukturen>. Cheap copies
  1064. of these editions should be available through used book services
  1065. such as C<abebooks.com>.
  1066. Worth's classic, however, is available on the
  1067. soundtrack to the 1997 I<George of the Jungle> movie, as
  1068. performed by The Presidents of the United States of America.
  1069. =head1 BACK
  1070. Return to the L<HTML::Tree|HTML::Tree> docs.
  1071. =cut