Team Fortress 2 Source Code as on 22/4/2020
// stb_connected_components - v0.95 - public domain connected components on grids
// http://github.com/nothings/stb
//
// Finds connected components on 2D grids for testing reachability between
// two points, with fast updates when changing reachability (e.g. on one machine
// it was typically 0.2ms w/ 1024x1024 grid). Each grid square must be "open" or
// "closed" (traversable or untraversable), and grid squares are only connected
// to their orthogonal neighbors, not diagonally.
//
// In one source file, create the implementation by doing something like this:
//
//    #define STBCC_GRID_COUNT_X_LOG2   10
//    #define STBCC_GRID_COUNT_Y_LOG2   10
//    #define STB_CONNECTED_COMPONENTS_IMPLEMENTATION
//    #include "stb_connected_components.h"
//
// The above creates an implementation that can run on maps up to 1024x1024.
// Map sizes must be a multiple of (1<<(LOG2/2)) on each axis (e.g. 32 if LOG2=10,
// 16 if LOG2=8, etc.) (You can just pad your map with untraversable space.)
//
// MEMORY USAGE
//
//   Uses about 6-7 bytes per grid square (e.g. 7MB for a 1024x1024 grid).
//   Uses a single worst-case allocation which you pass in.
//
// PERFORMANCE
//
//   On a Core i7-2700K at 3.5 GHz, for a particular 1024x1024 map (map_03.png):
//
//      Creating map                   : 44.85 ms
//      Making one square traversable  :  0.27 ms   (average over 29,448 calls)
//      Making one square untraversable:  0.23 ms   (average over 30,123 calls)
//      Reachability query             :  0.00001 ms (average over 4,000,000 calls)
//
//   On non-degenerate maps update time is O(N^0.5), but on degenerate maps
//   like checkerboards or 50% random, update time is O(N^0.75) (~2ms on the
//   above machine).
//
// CHANGELOG
//
//   0.95  (2016-10-16)  Bugfix if multiple clumps in one cluster connect to the same clump in another
//   0.94  (2016-04-17)  Bugfix & optimize worst case (checkerboard & random)
//   0.93  (2016-04-16)  Reduce memory by 10x for 1Kx1K map; small speedup
//   0.92  (2016-04-16)  Compute sqrt(N) cluster size by default
//   0.91  (2016-04-15)  Initial release
//
// TODO:
//   - better API documentation
//   - more comments
//   - try re-integrating the naive algorithm & compare performance
//   - more optimized batching (the current approach still recomputes local clumps many times)
//   - function for setting a grid of squares at once (just use batching)
//
// LICENSE
//
//   See end of file for license information.
//
// ALGORITHM
//
//   The NxN grid map is split into sqrt(N) x sqrt(N) blocks called
//   "clusters". Each cluster independently computes a set of connected
//   components within that cluster (ignoring all connectivity out of
//   that cluster) using a union-find disjoint set forest. This produces a bunch
//   of locally connected components called "clumps". Each clump (a) is connected
//   within its cluster, (b) does not directly connect to any other clumps in the
//   cluster (though it may connect to them by paths that lead outside the cluster,
//   those are ignored at this step), and (c) maintains an adjacency list of
//   all clumps in adjacent clusters that it _is_ connected to. Then a second
//   union-find disjoint set forest is used to compute connected clumps
//   globally, across the whole map. Reachability is then computed by
//   finding which clump each input point belongs to, and checking whether
//   those clumps are in the same "global" connected component.
//
//   The above data structure can be updated efficiently; on a change
//   of a single grid square on the map, only one cluster changes its
//   purely-local state, so only one cluster needs its clumps fully
//   recomputed. Clumps in adjacent clusters need their adjacency lists
//   updated: first to remove all references to the old clumps in the
//   rebuilt cluster, then to add new references to the new clumps. Both
//   of these operations can use the existing "find which clump each input
//   point belongs to" query to compute that adjacency information rapidly.
#ifndef INCLUDE_STB_CONNECTED_COMPONENTS_H
#define INCLUDE_STB_CONNECTED_COMPONENTS_H

#include <stdlib.h>

typedef struct st_stbcc_grid stbcc_grid;

#ifdef __cplusplus
extern "C" {
#endif

//////////////////////////////////////////////////////////////////////////////////////////
//
//  initialization
//

// you allocate the grid data structure to this size (note that it will be very big!!!)
extern size_t stbcc_grid_sizeof(void);

// initialize the grid; a value of 0 in map[] means traversable, non-0 means solid
extern void stbcc_init_grid(stbcc_grid *g, unsigned char *map, int w, int h);

//////////////////////////////////////////////////////////////////////////////////////////
//
//  main functionality
//

// update a grid square's state; 0 = traversable, non-0 = solid
// (I can add a batch-update if it's needed)
extern void stbcc_update_grid(stbcc_grid *g, int x, int y, int solid);

// query whether two grid squares are reachable from each other
extern int stbcc_query_grid_node_connection(stbcc_grid *g, int x1, int y1, int x2, int y2);

//////////////////////////////////////////////////////////////////////////////////////////
//
//  bonus functions
//

// wrap multiple stbcc_update_grid calls in these functions to compute
// multiple updates more efficiently; you cannot make queries inside a batch
extern void stbcc_update_batch_begin(stbcc_grid *g);
extern void stbcc_update_batch_end(stbcc_grid *g);

// query the grid data structure for whether a given square is open or not
extern int stbcc_query_grid_open(stbcc_grid *g, int x, int y);

// get a unique id for the connected component this square is in; it's not
// necessarily small, so you'll need a hash table or something to remap it
// (or just use it as an opaque key)
extern unsigned int stbcc_get_unique_id(stbcc_grid *g, int x, int y);

#define STBCC_NULL_UNIQUE_ID 0xffffffff // returned for closed map squares

#ifdef __cplusplus
}
#endif

#endif // INCLUDE_STB_CONNECTED_COMPONENTS_H
#ifdef STB_CONNECTED_COMPONENTS_IMPLEMENTATION

#include <assert.h>
#include <string.h> // memset

#if !defined(STBCC_GRID_COUNT_X_LOG2) || !defined(STBCC_GRID_COUNT_Y_LOG2)
   #error "You must define STBCC_GRID_COUNT_X_LOG2 and STBCC_GRID_COUNT_Y_LOG2 to define the max grid supported."
#endif

#define STBCC__GRID_COUNT_X (1 << STBCC_GRID_COUNT_X_LOG2)
#define STBCC__GRID_COUNT_Y (1 << STBCC_GRID_COUNT_Y_LOG2)

#define STBCC__MAP_STRIDE (1 << (STBCC_GRID_COUNT_X_LOG2-3))

#ifndef STBCC_CLUSTER_SIZE_X_LOG2
   #define STBCC_CLUSTER_SIZE_X_LOG2 (STBCC_GRID_COUNT_X_LOG2/2) // log2(sqrt(2^N)) = 1/2 * log2(2^N) = N/2
   #if STBCC_CLUSTER_SIZE_X_LOG2 > 6
      #undef STBCC_CLUSTER_SIZE_X_LOG2
      #define STBCC_CLUSTER_SIZE_X_LOG2 6
   #endif
#endif

#ifndef STBCC_CLUSTER_SIZE_Y_LOG2
   #define STBCC_CLUSTER_SIZE_Y_LOG2 (STBCC_GRID_COUNT_Y_LOG2/2)
   #if STBCC_CLUSTER_SIZE_Y_LOG2 > 6
      #undef STBCC_CLUSTER_SIZE_Y_LOG2
      #define STBCC_CLUSTER_SIZE_Y_LOG2 6
   #endif
#endif

#define STBCC__CLUSTER_SIZE_X (1 << STBCC_CLUSTER_SIZE_X_LOG2)
#define STBCC__CLUSTER_SIZE_Y (1 << STBCC_CLUSTER_SIZE_Y_LOG2)

#define STBCC__CLUSTER_COUNT_X_LOG2 (STBCC_GRID_COUNT_X_LOG2 - STBCC_CLUSTER_SIZE_X_LOG2)
#define STBCC__CLUSTER_COUNT_Y_LOG2 (STBCC_GRID_COUNT_Y_LOG2 - STBCC_CLUSTER_SIZE_Y_LOG2)
#define STBCC__CLUSTER_COUNT_X (1 << STBCC__CLUSTER_COUNT_X_LOG2)
#define STBCC__CLUSTER_COUNT_Y (1 << STBCC__CLUSTER_COUNT_Y_LOG2)

#if STBCC__CLUSTER_SIZE_X >= STBCC__GRID_COUNT_X || STBCC__CLUSTER_SIZE_Y >= STBCC__GRID_COUNT_Y
   #error "STBCC_CLUSTER_SIZE_X/Y_LOG2 must be smaller than STBCC_GRID_COUNT_X/Y_LOG2"
#endif

// worst-case # of clumps per cluster
#define STBCC__MAX_CLUMPS_PER_CLUSTER_LOG2 (STBCC_CLUSTER_SIZE_X_LOG2 + STBCC_CLUSTER_SIZE_Y_LOG2-1)
#define STBCC__MAX_CLUMPS_PER_CLUSTER (1 << STBCC__MAX_CLUMPS_PER_CLUSTER_LOG2)
#define STBCC__MAX_CLUMPS (STBCC__MAX_CLUMPS_PER_CLUSTER * STBCC__CLUSTER_COUNT_X * STBCC__CLUSTER_COUNT_Y)
#define STBCC__NULL_CLUMPID STBCC__MAX_CLUMPS_PER_CLUSTER

#define STBCC__CLUSTER_X_FOR_COORD_X(x) ((x) >> STBCC_CLUSTER_SIZE_X_LOG2)
#define STBCC__CLUSTER_Y_FOR_COORD_Y(y) ((y) >> STBCC_CLUSTER_SIZE_Y_LOG2)

#define STBCC__MAP_BYTE_MASK(x,y) (1 << ((x) & 7))
#define STBCC__MAP_BYTE(g,x,y)    ((g)->map[y][(x) >> 3])
#define STBCC__MAP_OPEN(g,x,y)    (STBCC__MAP_BYTE(g,x,y) & STBCC__MAP_BYTE_MASK(x,y))

typedef unsigned short stbcc__clumpid;
typedef unsigned char stbcc__verify_max_clumps[STBCC__MAX_CLUMPS_PER_CLUSTER < (1 << (8*sizeof(stbcc__clumpid))) ? 1 : -1];

#define STBCC__MAX_EXITS_PER_CLUSTER (STBCC__CLUSTER_SIZE_X + STBCC__CLUSTER_SIZE_Y) // 64 for 32x32
#define STBCC__MAX_EXITS_PER_CLUMP   (STBCC__CLUSTER_SIZE_X + STBCC__CLUSTER_SIZE_Y) // 64 for 32x32
#define STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER (STBCC__MAX_EXITS_PER_CLUMP)

// 2^19 * 2^6 => 2^25 exits => 2^26 bytes => 64MB for 1024x1024

// Logic for the above, on a 4x4 grid ('+' marks an exit):
//
//   Many clumps:   One clump:
//    + +            + +
//   +X.X.          +XX.X+
//   .X.X+          .XXX
//   +X.X.          XXX.
//   .X.X+          +X.XX+
//    + +            + +
//
//   8 exits either way

typedef unsigned char stbcc__verify_max_exits[STBCC__MAX_EXITS_PER_CLUMP <= 256 ? 1 : -1];
typedef struct
{
   unsigned short clump_index:12;
   signed   short cluster_dx:2;
   signed   short cluster_dy:2;
} stbcc__relative_clumpid;

typedef union
{
   struct {
      unsigned int clump_index:12;
      unsigned int cluster_x:10;
      unsigned int cluster_y:10;
   } f;
   unsigned int c;
} stbcc__global_clumpid;

// rebuilt cluster 3,4
// what changes in cluster 2,4

typedef struct
{
   stbcc__global_clumpid global_label;              // 4
   unsigned char         num_adjacent;              // 1
   unsigned char         max_adjacent;              // 1
   unsigned char         adjacent_clump_list_index; // 1
   unsigned char         reserved;
} stbcc__clump; // 8

#define STBCC__CLUSTER_ADJACENCY_COUNT (STBCC__MAX_EXITS_PER_CLUSTER*2)

typedef struct
{
   short num_clumps;
   unsigned char num_edge_clumps;
   unsigned char rebuild_adjacency;
   stbcc__clump clump[STBCC__MAX_CLUMPS_PER_CLUSTER];                         // 8 * 2^9 = 4KB
   stbcc__relative_clumpid adjacency_storage[STBCC__CLUSTER_ADJACENCY_COUNT]; // 256 bytes
} stbcc__cluster;

struct st_stbcc_grid
{
   int w,h,cw,ch;
   int in_batched_update;
   //unsigned char cluster_dirty[STBCC__CLUSTER_COUNT_Y][STBCC__CLUSTER_COUNT_X]; // could bitpack, but: 1K x 1K => 1KB
   unsigned char  map[STBCC__GRID_COUNT_Y][STBCC__MAP_STRIDE];               // 1K x 1K => 1K x 128 => 128KB
   stbcc__clumpid clump_for_node[STBCC__GRID_COUNT_Y][STBCC__GRID_COUNT_X];  // 1K x 1K x 2 = 2MB
   stbcc__cluster cluster[STBCC__CLUSTER_COUNT_Y][STBCC__CLUSTER_COUNT_X];   // 1K x 4.5KB = 4.5MB
};
int stbcc_query_grid_node_connection(stbcc_grid *g, int x1, int y1, int x2, int y2)
{
   stbcc__global_clumpid label1, label2;
   stbcc__clumpid c1 = g->clump_for_node[y1][x1];
   stbcc__clumpid c2 = g->clump_for_node[y2][x2];
   int cx1 = STBCC__CLUSTER_X_FOR_COORD_X(x1);
   int cy1 = STBCC__CLUSTER_Y_FOR_COORD_Y(y1);
   int cx2 = STBCC__CLUSTER_X_FOR_COORD_X(x2);
   int cy2 = STBCC__CLUSTER_Y_FOR_COORD_Y(y2);
   assert(!g->in_batched_update);
   if (c1 == STBCC__NULL_CLUMPID || c2 == STBCC__NULL_CLUMPID)
      return 0;
   label1 = g->cluster[cy1][cx1].clump[c1].global_label;
   label2 = g->cluster[cy2][cx2].clump[c2].global_label;
   if (label1.c == label2.c)
      return 1;
   return 0;
}

int stbcc_query_grid_open(stbcc_grid *g, int x, int y)
{
   return STBCC__MAP_OPEN(g, x, y) != 0;
}

unsigned int stbcc_get_unique_id(stbcc_grid *g, int x, int y)
{
   stbcc__clumpid c = g->clump_for_node[y][x];
   int cx = STBCC__CLUSTER_X_FOR_COORD_X(x);
   int cy = STBCC__CLUSTER_Y_FOR_COORD_Y(y);
   assert(!g->in_batched_update);
   if (c == STBCC__NULL_CLUMPID) return STBCC_NULL_UNIQUE_ID;
   return g->cluster[cy][cx].clump[c].global_label.c;
}
typedef struct
{
   unsigned char x,y;
} stbcc__tinypoint;

typedef struct
{
   stbcc__tinypoint parent[STBCC__CLUSTER_SIZE_Y][STBCC__CLUSTER_SIZE_X]; // 32x32 => 2KB
   stbcc__clumpid   label [STBCC__CLUSTER_SIZE_Y][STBCC__CLUSTER_SIZE_X];
} stbcc__cluster_build_info;

static void stbcc__build_clumps_for_cluster(stbcc_grid *g, int cx, int cy);
static void stbcc__remove_connections_to_adjacent_cluster(stbcc_grid *g, int cx, int cy, int dx, int dy);
static void stbcc__add_connections_to_adjacent_cluster(stbcc_grid *g, int cx, int cy, int dx, int dy);

// find the global leader of clump n, compressing the path as we go
static stbcc__global_clumpid stbcc__clump_find(stbcc_grid *g, stbcc__global_clumpid n)
{
   stbcc__global_clumpid q;
   stbcc__clump *c = &g->cluster[n.f.cluster_y][n.f.cluster_x].clump[n.f.clump_index];
   if (c->global_label.c == n.c)
      return n;
   q = stbcc__clump_find(g, c->global_label);
   c->global_label = q;
   return q;
}

typedef struct
{
   unsigned int cluster_x;
   unsigned int cluster_y;
   unsigned int clump_index;
} stbcc__unpacked_clumpid;

// merge the global components containing clump m and clump (x,y,idx)
static void stbcc__clump_union(stbcc_grid *g, stbcc__unpacked_clumpid m, int x, int y, int idx)
{
   stbcc__clump *mc = &g->cluster[m.cluster_y][m.cluster_x].clump[m.clump_index];
   stbcc__clump *nc = &g->cluster[y][x].clump[idx];
   stbcc__global_clumpid mp = stbcc__clump_find(g, mc->global_label);
   stbcc__global_clumpid np = stbcc__clump_find(g, nc->global_label);
   if (mp.c == np.c)
      return;
   g->cluster[mp.f.cluster_y][mp.f.cluster_x].clump[mp.f.clump_index].global_label = np;
}
static void stbcc__build_connected_components_for_clumps(stbcc_grid *g)
{
   int i,j,k,h;

   // initially, each edge clump is its own global component
   for (j=0; j < STBCC__CLUSTER_COUNT_Y; ++j) {
      for (i=0; i < STBCC__CLUSTER_COUNT_X; ++i) {
         stbcc__cluster *cluster = &g->cluster[j][i];
         for (k=0; k < (int) cluster->num_edge_clumps; ++k) {
            stbcc__global_clumpid m;
            m.f.clump_index = k;
            m.f.cluster_x = i;
            m.f.cluster_y = j;
            assert((int) m.f.clump_index == k && (int) m.f.cluster_x == i && (int) m.f.cluster_y == j);
            cluster->clump[k].global_label = m;
         }
      }
   }

   // union together all clumps that are adjacent across cluster boundaries
   for (j=0; j < STBCC__CLUSTER_COUNT_Y; ++j) {
      for (i=0; i < STBCC__CLUSTER_COUNT_X; ++i) {
         stbcc__cluster *cluster = &g->cluster[j][i];
         for (k=0; k < (int) cluster->num_edge_clumps; ++k) {
            stbcc__clump *clump = &cluster->clump[k];
            stbcc__unpacked_clumpid m;
            stbcc__relative_clumpid *adj;
            m.clump_index = k;
            m.cluster_x = i;
            m.cluster_y = j;
            adj = &cluster->adjacency_storage[clump->adjacent_clump_list_index];
            for (h=0; h < clump->num_adjacent; ++h) {
               unsigned int clump_index = adj[h].clump_index;
               unsigned int x = adj[h].cluster_dx + i;
               unsigned int y = adj[h].cluster_dy + j;
               stbcc__clump_union(g, m, x, y, clump_index);
            }
         }
      }
   }

   // flatten the forest so every clump points directly at its leader
   for (j=0; j < STBCC__CLUSTER_COUNT_Y; ++j) {
      for (i=0; i < STBCC__CLUSTER_COUNT_X; ++i) {
         stbcc__cluster *cluster = &g->cluster[j][i];
         for (k=0; k < (int) cluster->num_edge_clumps; ++k) {
            stbcc__global_clumpid m;
            m.f.clump_index = k;
            m.f.cluster_x = i;
            m.f.cluster_y = j;
            stbcc__clump_find(g, m);
         }
      }
   }
}
static void stbcc__build_all_connections_for_cluster(stbcc_grid *g, int cx, int cy)
{
   // in this particular case, we are fully non-incremental. that means we
   // can discover the correct sizes for the arrays, but it requires we build
   // the data into temporary data structures, or just count the sizes, so
   // for simplicity we do the latter
   stbcc__cluster *cluster = &g->cluster[cy][cx];
   unsigned char connected[STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER][STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER/8]; // 64 x 8 => 1KB
   unsigned char num_adj[STBCC__MAX_CLUMPS_PER_CLUSTER] = { 0 };
   int x = cx * STBCC__CLUSTER_SIZE_X;
   int y = cy * STBCC__CLUSTER_SIZE_Y;
   int step_x, step_y=0, i, j, k, n, m, dx, dy, total;
   int extra;

   g->cluster[cy][cx].rebuild_adjacency = 0;

   total = 0;
   // walk all four edges of the cluster, counting distinct clump-to-clump connections
   for (m=0; m < 4; ++m) {
      switch (m) {
         case 0:
            dx = 1, dy = 0;
            step_x = 0, step_y = 1;
            i = STBCC__CLUSTER_SIZE_X-1;
            j = 0;
            n = STBCC__CLUSTER_SIZE_Y;
            break;
         case 1:
            dx = -1, dy = 0;
            i = 0;
            j = 0;
            step_x = 0;
            step_y = 1;
            n = STBCC__CLUSTER_SIZE_Y;
            break;
         case 2:
            dy = -1, dx = 0;
            i = 0;
            j = 0;
            step_x = 1;
            step_y = 0;
            n = STBCC__CLUSTER_SIZE_X;
            break;
         case 3:
            dy = 1, dx = 0;
            i = 0;
            j = STBCC__CLUSTER_SIZE_Y-1;
            step_x = 1;
            step_y = 0;
            n = STBCC__CLUSTER_SIZE_X;
            break;
      }

      if (cx+dx < 0 || cx+dx >= g->cw || cy+dy < 0 || cy+dy >= g->ch)
         continue;

      memset(connected, 0, sizeof(connected));
      for (k=0; k < n; ++k) {
         if (STBCC__MAP_OPEN(g, x+i, y+j) && STBCC__MAP_OPEN(g, x+i+dx, y+j+dy)) {
            stbcc__clumpid src = g->clump_for_node[y+j][x+i];
            stbcc__clumpid dest = g->clump_for_node[y+j+dy][x+i+dx];
            if (0 == (connected[src][dest>>3] & (1 << (dest & 7)))) {
               connected[src][dest>>3] |= 1 << (dest & 7);
               ++num_adj[src];
               ++total;
            }
         }
         i += step_x;
         j += step_y;
      }
   }

   assert(total <= STBCC__CLUSTER_ADJACENCY_COUNT);

   // decide how to apportion unused adjacency slots; only clumps that lie
   // on the edges of the cluster need adjacency slots, so divide them up
   // evenly between those clumps

   // we want:
   //    extra = (STBCC__CLUSTER_ADJACENCY_COUNT - total) / cluster->num_edge_clumps;
   // but we efficiently approximate this without a divide, because
   // ignoring edge-vs-non-edge with 'num_adj[i]*2' was faster than
   // 'num_adj[i]+extra' with the divide
   if (total + (cluster->num_edge_clumps<<2) <= STBCC__CLUSTER_ADJACENCY_COUNT)
      extra = 4;
   else if (total + (cluster->num_edge_clumps<<1) <= STBCC__CLUSTER_ADJACENCY_COUNT)
      extra = 2;
   else if (total + (cluster->num_edge_clumps<<0) <= STBCC__CLUSTER_ADJACENCY_COUNT)
      extra = 1;
   else
      extra = 0;

   total = 0;
   for (i=0; i < (int) cluster->num_edge_clumps; ++i) {
      int alloc = num_adj[i]+extra;
      if (alloc > STBCC__MAX_EXITS_PER_CLUSTER)
         alloc = STBCC__MAX_EXITS_PER_CLUSTER;
      assert(total < 256); // must fit in byte
      cluster->clump[i].adjacent_clump_list_index = (unsigned char) total;
      cluster->clump[i].max_adjacent = alloc;
      cluster->clump[i].num_adjacent = 0;
      total += alloc;
   }
   assert(total <= STBCC__CLUSTER_ADJACENCY_COUNT);

   stbcc__add_connections_to_adjacent_cluster(g, cx, cy, -1, 0);
   stbcc__add_connections_to_adjacent_cluster(g, cx, cy,  1, 0);
   stbcc__add_connections_to_adjacent_cluster(g, cx, cy,  0,-1);
   stbcc__add_connections_to_adjacent_cluster(g, cx, cy,  0, 1);
   // make sure all of the above succeeded
   assert(g->cluster[cy][cx].rebuild_adjacency == 0);
}
static void stbcc__add_connections_to_adjacent_cluster_with_rebuild(stbcc_grid *g, int cx, int cy, int dx, int dy)
{
   if (cx >= 0 && cx < g->cw && cy >= 0 && cy < g->ch) {
      stbcc__add_connections_to_adjacent_cluster(g, cx, cy, dx, dy);
      if (g->cluster[cy][cx].rebuild_adjacency)
         stbcc__build_all_connections_for_cluster(g, cx, cy);
   }
}
void stbcc_update_grid(stbcc_grid *g, int x, int y, int solid)
{
   int cx,cy;

   // no-op if the square already has the requested state
   if (!solid) {
      if (STBCC__MAP_OPEN(g,x,y))
         return;
   } else {
      if (!STBCC__MAP_OPEN(g,x,y))
         return;
   }

   cx = STBCC__CLUSTER_X_FOR_COORD_X(x);
   cy = STBCC__CLUSTER_Y_FOR_COORD_Y(y);

   // remove the neighbors' references to this cluster's soon-to-be-stale clumps
   stbcc__remove_connections_to_adjacent_cluster(g, cx-1, cy,  1, 0);
   stbcc__remove_connections_to_adjacent_cluster(g, cx+1, cy, -1, 0);
   stbcc__remove_connections_to_adjacent_cluster(g, cx, cy-1,  0, 1);
   stbcc__remove_connections_to_adjacent_cluster(g, cx, cy+1,  0,-1);

   if (!solid)
      STBCC__MAP_BYTE(g,x,y) |= STBCC__MAP_BYTE_MASK(x,y);
   else
      STBCC__MAP_BYTE(g,x,y) &= ~STBCC__MAP_BYTE_MASK(x,y);

   // rebuild this cluster's clumps, then reconnect it and its neighbors
   stbcc__build_clumps_for_cluster(g, cx, cy);
   stbcc__build_all_connections_for_cluster(g, cx, cy);

   stbcc__add_connections_to_adjacent_cluster_with_rebuild(g, cx-1, cy,  1, 0);
   stbcc__add_connections_to_adjacent_cluster_with_rebuild(g, cx+1, cy, -1, 0);
   stbcc__add_connections_to_adjacent_cluster_with_rebuild(g, cx, cy-1,  0, 1);
   stbcc__add_connections_to_adjacent_cluster_with_rebuild(g, cx, cy+1,  0,-1);

   if (!g->in_batched_update)
      stbcc__build_connected_components_for_clumps(g);
   #if 0
   else
      g->cluster_dirty[cy][cx] = 1;
   #endif
}
void stbcc_update_batch_begin(stbcc_grid *g)
{
   assert(!g->in_batched_update);
   g->in_batched_update = 1;
}

void stbcc_update_batch_end(stbcc_grid *g)
{
   assert(g->in_batched_update);
   g->in_batched_update = 0;
   stbcc__build_connected_components_for_clumps(g); // @OPTIMIZE: only do this if the update was non-empty
}

size_t stbcc_grid_sizeof(void)
{
   return sizeof(stbcc_grid);
}
void stbcc_init_grid(stbcc_grid *g, unsigned char *map, int w, int h)
{
   int i,j,k;
   assert(w % STBCC__CLUSTER_SIZE_X == 0);
   assert(h % STBCC__CLUSTER_SIZE_Y == 0);
   assert(w % 8 == 0);

   g->w = w;
   g->h = h;
   g->cw = w >> STBCC_CLUSTER_SIZE_X_LOG2;
   g->ch = h >> STBCC_CLUSTER_SIZE_Y_LOG2;
   g->in_batched_update = 0;

   #if 0
   for (j=0; j < STBCC__CLUSTER_COUNT_Y; ++j)
      for (i=0; i < STBCC__CLUSTER_COUNT_X; ++i)
         g->cluster_dirty[j][i] = 0;
   #endif

   // pack the input map into bits: a set bit means open (map[] == 0)
   for (j=0; j < h; ++j) {
      for (i=0; i < w; i += 8) {
         unsigned char c = 0;
         for (k=0; k < 8; ++k)
            if (map[j*w + (i+k)] == 0)
               c |= (1 << k);
         g->map[j][i>>3] = c;
      }
   }

   for (j=0; j < g->ch; ++j)
      for (i=0; i < g->cw; ++i)
         stbcc__build_clumps_for_cluster(g, i, j);

   for (j=0; j < g->ch; ++j)
      for (i=0; i < g->cw; ++i)
         stbcc__build_all_connections_for_cluster(g, i, j);

   stbcc__build_connected_components_for_clumps(g);

   for (j=0; j < g->h; ++j)
      for (i=0; i < g->w; ++i)
         assert(g->clump_for_node[j][i] <= STBCC__NULL_CLUMPID);
}
static void stbcc__add_clump_connection(stbcc_grid *g, int x1, int y1, int x2, int y2)
{
   stbcc__cluster *cluster;
   stbcc__clump *clump;

   int cx1 = STBCC__CLUSTER_X_FOR_COORD_X(x1);
   int cy1 = STBCC__CLUSTER_Y_FOR_COORD_Y(y1);
   int cx2 = STBCC__CLUSTER_X_FOR_COORD_X(x2);
   int cy2 = STBCC__CLUSTER_Y_FOR_COORD_Y(y2);

   stbcc__clumpid c1 = g->clump_for_node[y1][x1];
   stbcc__clumpid c2 = g->clump_for_node[y2][x2];

   stbcc__relative_clumpid rc;

   assert(cx1 != cx2 || cy1 != cy2);
   assert(abs(cx1-cx2) + abs(cy1-cy2) == 1);

   // add a connection to c2 in c1
   rc.clump_index = c2;
   rc.cluster_dx = x2-x1;
   rc.cluster_dy = y2-y1;

   cluster = &g->cluster[cy1][cx1];
   clump = &cluster->clump[c1];
   assert(clump->num_adjacent <= clump->max_adjacent);
   if (clump->num_adjacent == clump->max_adjacent)
      g->cluster[cy1][cx1].rebuild_adjacency = 1; // out of slots; force a full rebuild
   else {
      stbcc__relative_clumpid *adj = &cluster->adjacency_storage[clump->adjacent_clump_list_index];
      assert(clump->num_adjacent < STBCC__MAX_EXITS_PER_CLUMP);
      assert(clump->adjacent_clump_list_index + clump->num_adjacent <= STBCC__CLUSTER_ADJACENCY_COUNT);
      adj[clump->num_adjacent++] = rc;
   }
}
static void stbcc__remove_clump_connection(stbcc_grid *g, int x1, int y1, int x2, int y2)
{
   stbcc__cluster *cluster;
   stbcc__clump *clump;
   stbcc__relative_clumpid *adj;
   int i;

   int cx1 = STBCC__CLUSTER_X_FOR_COORD_X(x1);
   int cy1 = STBCC__CLUSTER_Y_FOR_COORD_Y(y1);
   int cx2 = STBCC__CLUSTER_X_FOR_COORD_X(x2);
   int cy2 = STBCC__CLUSTER_Y_FOR_COORD_Y(y2);

   stbcc__clumpid c1 = g->clump_for_node[y1][x1];
   stbcc__clumpid c2 = g->clump_for_node[y2][x2];

   stbcc__relative_clumpid rc;

   assert(cx1 != cx2 || cy1 != cy2);
   assert(abs(cx1-cx2) + abs(cy1-cy2) == 1);

   // find the connection to c2 in c1 and remove it
   rc.clump_index = c2;
   rc.cluster_dx = x2-x1;
   rc.cluster_dy = y2-y1;

   cluster = &g->cluster[cy1][cx1];
   clump = &cluster->clump[c1];
   adj = &cluster->adjacency_storage[clump->adjacent_clump_list_index];
   for (i=0; i < clump->num_adjacent; ++i)
      if (rc.clump_index == adj[i].clump_index &&
          rc.cluster_dx  == adj[i].cluster_dx  &&
          rc.cluster_dy  == adj[i].cluster_dy)
         break;

   if (i < clump->num_adjacent)
      adj[i] = adj[--clump->num_adjacent]; // swap-remove with the last entry
   else
      assert(0); // the connection must have existed
}
static void stbcc__add_connections_to_adjacent_cluster(stbcc_grid *g, int cx, int cy, int dx, int dy)
{
   unsigned char connected[STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER][STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER/8] = { 0 };
   int x = cx * STBCC__CLUSTER_SIZE_X;
   int y = cy * STBCC__CLUSTER_SIZE_Y;
   int step_x, step_y=0, i, j, k, n;

   if (cx < 0 || cx >= g->cw || cy < 0 || cy >= g->ch)
      return;
   if (cx+dx < 0 || cx+dx >= g->cw || cy+dy < 0 || cy+dy >= g->ch)
      return;
   if (g->cluster[cy][cx].rebuild_adjacency)
      return;

   assert(abs(dx) + abs(dy) == 1);

   if (dx == 1) {
      i = STBCC__CLUSTER_SIZE_X-1;
      j = 0;
      step_x = 0;
      step_y = 1;
      n = STBCC__CLUSTER_SIZE_Y;
   } else if (dx == -1) {
      i = 0;
      j = 0;
      step_x = 0;
      step_y = 1;
      n = STBCC__CLUSTER_SIZE_Y;
   } else if (dy == -1) {
      i = 0;
      j = 0;
      step_x = 1;
      step_y = 0;
      n = STBCC__CLUSTER_SIZE_X;
   } else if (dy == 1) {
      i = 0;
      j = STBCC__CLUSTER_SIZE_Y-1;
      step_x = 1;
      step_y = 0;
      n = STBCC__CLUSTER_SIZE_X;
   } else {
      assert(0);
      return;
   }

   for (k=0; k < n; ++k) {
      if (STBCC__MAP_OPEN(g, x+i, y+j) && STBCC__MAP_OPEN(g, x+i+dx, y+j+dy)) {
         stbcc__clumpid src = g->clump_for_node[y+j][x+i];
         stbcc__clumpid dest = g->clump_for_node[y+j+dy][x+i+dx];
         if (0 == (connected[src][dest>>3] & (1 << (dest & 7)))) {
            assert((dest>>3) < sizeof(connected[0]));
            connected[src][dest>>3] |= 1 << (dest & 7);
            stbcc__add_clump_connection(g, x+i, y+j, x+i+dx, y+j+dy);
            if (g->cluster[cy][cx].rebuild_adjacency)
               break; // the cluster will be fully rebuilt; no point continuing
         }
      }
      i += step_x;
      j += step_y;
   }
}
static void stbcc__remove_connections_to_adjacent_cluster(stbcc_grid *g, int cx, int cy, int dx, int dy)
{
   unsigned char disconnected[STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER][STBCC__MAX_EDGE_CLUMPS_PER_CLUSTER/8] = { 0 };
   int x = cx * STBCC__CLUSTER_SIZE_X;
   int y = cy * STBCC__CLUSTER_SIZE_Y;
   int step_x, step_y=0, i, j, k, n;

   if (cx < 0 || cx >= g->cw || cy < 0 || cy >= g->ch)
      return;
   if (cx+dx < 0 || cx+dx >= g->cw || cy+dy < 0 || cy+dy >= g->ch)
      return;

   assert(abs(dx) + abs(dy) == 1);

   if (dx == 1) {
      i = STBCC__CLUSTER_SIZE_X-1;
      j = 0;
      step_x = 0;
      step_y = 1;
      n = STBCC__CLUSTER_SIZE_Y;
   } else if (dx == -1) {
      i = 0;
      j = 0;
      step_x = 0;
      step_y = 1;
      n = STBCC__CLUSTER_SIZE_Y;
   } else if (dy == -1) {
      i = 0;
      j = 0;
      step_x = 1;
      step_y = 0;
      n = STBCC__CLUSTER_SIZE_X;
   } else if (dy == 1) {
      i = 0;
      j = STBCC__CLUSTER_SIZE_Y-1;
      step_x = 1;
      step_y = 0;
      n = STBCC__CLUSTER_SIZE_X;
   } else {
      assert(0);
      return;
   }

   for (k=0; k < n; ++k) {
      if (STBCC__MAP_OPEN(g, x+i, y+j) && STBCC__MAP_OPEN(g, x+i+dx, y+j+dy)) {
         stbcc__clumpid src = g->clump_for_node[y+j][x+i];
         stbcc__clumpid dest = g->clump_for_node[y+j+dy][x+i+dx];
         if (0 == (disconnected[src][dest>>3] & (1 << (dest & 7)))) {
            disconnected[src][dest>>3] |= 1 << (dest & 7);
            stbcc__remove_clump_connection(g, x+i, y+j, x+i+dx, y+j+dy);
         }
      }
      i += step_x;
      j += step_y;
   }
}
// union-find within a single cluster, with (x,y) points as the set elements

static stbcc__tinypoint stbcc__incluster_find(stbcc__cluster_build_info *cbi, int x, int y)
{
   stbcc__tinypoint p,q;
   p = cbi->parent[y][x];
   if (p.x == x && p.y == y)
      return p;
   q = stbcc__incluster_find(cbi, p.x, p.y);
   cbi->parent[y][x] = q; // path compression
   return q;
}

static void stbcc__incluster_union(stbcc__cluster_build_info *cbi, int x1, int y1, int x2, int y2)
{
   stbcc__tinypoint p = stbcc__incluster_find(cbi, x1,y1);
   stbcc__tinypoint q = stbcc__incluster_find(cbi, x2,y2);

   if (p.x == q.x && p.y == q.y)
      return;

   cbi->parent[p.y][p.x] = q;
}

// make (x,y) the root of the set whose current root is p
static void stbcc__switch_root(stbcc__cluster_build_info *cbi, int x, int y, stbcc__tinypoint p)
{
   cbi->parent[p.y][p.x].x = x;
   cbi->parent[p.y][p.x].y = y;
   cbi->parent[y][x].x = x;
   cbi->parent[y][x].y = y;
}
static void stbcc__build_clumps_for_cluster(stbcc_grid *g, int cx, int cy)
{
   stbcc__cluster *c;
   stbcc__cluster_build_info cbi;
   int label=0;
   int i,j;
   int x = cx * STBCC__CLUSTER_SIZE_X;
   int y = cy * STBCC__CLUSTER_SIZE_Y;

   // set initial disjoint set forest state
   for (j=0; j < STBCC__CLUSTER_SIZE_Y; ++j) {
      for (i=0; i < STBCC__CLUSTER_SIZE_X; ++i) {
         cbi.parent[j][i].x = i;
         cbi.parent[j][i].y = j;
      }
   }

   // join all sets that are connected
   for (j=0; j < STBCC__CLUSTER_SIZE_Y; ++j) {
      // check down only if not on bottom row
      if (j < STBCC__CLUSTER_SIZE_Y-1)
         for (i=0; i < STBCC__CLUSTER_SIZE_X; ++i)
            if (STBCC__MAP_OPEN(g,x+i,y+j) && STBCC__MAP_OPEN(g,x+i ,y+j+1))
               stbcc__incluster_union(&cbi, i,j, i,j+1);
      // check right for everything but the rightmost column
      for (i=0; i < STBCC__CLUSTER_SIZE_X-1; ++i)
         if (STBCC__MAP_OPEN(g,x+i,y+j) && STBCC__MAP_OPEN(g,x+i+1,y+j ))
            stbcc__incluster_union(&cbi, i,j, i+1,j);
   }

   // label all non-empty clumps along the edges so that all edge clumps come
   // first in the list; this means in the degenerate case we can skip traversing
   // non-edge clumps. because in this first pass we only label leaders, we swap
   // the leader to the edge node first where necessary.

   // first put solid labels on all the edges; these get overwritten if the square is open
   for (j=0; j < STBCC__CLUSTER_SIZE_Y; ++j)
      cbi.label[j][0] = cbi.label[j][STBCC__CLUSTER_SIZE_X-1] = STBCC__NULL_CLUMPID;
   for (i=0; i < STBCC__CLUSTER_SIZE_X; ++i)
      cbi.label[0][i] = cbi.label[STBCC__CLUSTER_SIZE_Y-1][i] = STBCC__NULL_CLUMPID;

   // label the left and right edge columns
   for (j=0; j < STBCC__CLUSTER_SIZE_Y; ++j) {
      i = 0;
      if (STBCC__MAP_OPEN(g, x+i, y+j)) {
         stbcc__tinypoint p = stbcc__incluster_find(&cbi, i,j);
         if (p.x == i && p.y == j)
            // if this is the leader, give it a label
            cbi.label[j][i] = label++;
         else if (!(p.x == 0 || p.x == STBCC__CLUSTER_SIZE_X-1 || p.y == 0 || p.y == STBCC__CLUSTER_SIZE_Y-1)) {
            // if the leader is in the interior, promote this edge node to leader and label it
            stbcc__switch_root(&cbi, i, j, p);
            cbi.label[j][i] = label++;
         }
         // else the leader is on an edge; do nothing (it'll get labelled when we reach it)
      }
      i = STBCC__CLUSTER_SIZE_X-1;
      if (STBCC__MAP_OPEN(g, x+i, y+j)) {
         stbcc__tinypoint p = stbcc__incluster_find(&cbi, i,j);
         if (p.x == i && p.y == j)
            cbi.label[j][i] = label++;
         else if (!(p.x == 0 || p.x == STBCC__CLUSTER_SIZE_X-1 || p.y == 0 || p.y == STBCC__CLUSTER_SIZE_Y-1)) {
            stbcc__switch_root(&cbi, i, j, p);
            cbi.label[j][i] = label++;
         }
      }
   }
   // label the top and bottom edge rows (the corners were handled by the pass above);
   // i indexes x here, so the bound must be the X size, not Y (they differ when LOG2 is odd)
   for (i=1; i < STBCC__CLUSTER_SIZE_X-1; ++i) {
      j = 0;
      if (STBCC__MAP_OPEN(g, x+i, y+j)) {
         stbcc__tinypoint p = stbcc__incluster_find(&cbi, i,j);
         if (p.x == i && p.y == j)
            cbi.label[j][i] = label++;
         else if (!(p.x == 0 || p.x == STBCC__CLUSTER_SIZE_X-1 || p.y == 0 || p.y == STBCC__CLUSTER_SIZE_Y-1)) {
            stbcc__switch_root(&cbi, i, j, p);
            cbi.label[j][i] = label++;
         }
      }
      j = STBCC__CLUSTER_SIZE_Y-1;
      if (STBCC__MAP_OPEN(g, x+i, y+j)) {
         stbcc__tinypoint p = stbcc__incluster_find(&cbi, i,j);
         if (p.x == i && p.y == j)
            cbi.label[j][i] = label++;
         else if (!(p.x == 0 || p.x == STBCC__CLUSTER_SIZE_X-1 || p.y == 0 || p.y == STBCC__CLUSTER_SIZE_Y-1)) {
            stbcc__switch_root(&cbi, i, j, p);
            cbi.label[j][i] = label++;
         }
      }
   }

   c = &g->cluster[cy][cx];
   c->num_edge_clumps = label;

   // label any interior clumps (their leaders are interior by construction now)
   for (j=1; j < STBCC__CLUSTER_SIZE_Y-1; ++j) {
      for (i=1; i < STBCC__CLUSTER_SIZE_X-1; ++i) {
         stbcc__tinypoint p = cbi.parent[j][i];
         if (p.x == i && p.y == j) {
            if (STBCC__MAP_OPEN(g,x+i,y+j))
               cbi.label[j][i] = label++;
            else
               cbi.label[j][i] = STBCC__NULL_CLUMPID;
         }
      }
   }

   // label all other nodes by copying their leader's label
   for (j=0; j < STBCC__CLUSTER_SIZE_Y; ++j) {
      for (i=0; i < STBCC__CLUSTER_SIZE_X; ++i) {
         stbcc__tinypoint p = stbcc__incluster_find(&cbi, i,j);
         if (p.x != i || p.y != j) {
            if (STBCC__MAP_OPEN(g,x+i,y+j))
               cbi.label[j][i] = cbi.label[p.y][p.x];
         }
         if (STBCC__MAP_OPEN(g,x+i,y+j))
            assert(cbi.label[j][i] != STBCC__NULL_CLUMPID);
      }
   }

   c->num_clumps = label;

   for (i=0; i < label; ++i) {
      c->clump[i].num_adjacent = 0;
      c->clump[i].max_adjacent = 0;
   }

   for (j=0; j < STBCC__CLUSTER_SIZE_Y; ++j)
      for (i=0; i < STBCC__CLUSTER_SIZE_X; ++i) {
         g->clump_for_node[y+j][x+i] = cbi.label[j][i]; // @OPTIMIZE: remove cbi.label entirely
         assert(g->clump_for_node[y+j][x+i] <= STBCC__NULL_CLUMPID);
      }

   // set the global label for all interior clumps now, since they can't have
   // connections; then the global pass doesn't have to visit them
   // (bringing it from O(N) down to O(N^0.75))
   for (i=(int) c->num_edge_clumps; i < (int) c->num_clumps; ++i) {
      stbcc__global_clumpid gc;
      gc.f.cluster_x = cx;
      gc.f.cluster_y = cy;
      gc.f.clump_index = i;
      c->clump[i].global_label = gc;
   }

   c->rebuild_adjacency = 1; // flag that it has no valid adjacency data
}
#endif // STB_CONNECTED_COMPONENTS_IMPLEMENTATION
/*
------------------------------------------------------------------------------
This software is available under 2 licenses -- choose whichever you prefer.
------------------------------------------------------------------------------
ALTERNATIVE A - MIT License
Copyright (c) 2017 Sean Barrett
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
------------------------------------------------------------------------------
ALTERNATIVE B - Public Domain (www.unlicense.org)
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this
software, either in source code form or as a compiled binary, for any purpose,
commercial or non-commercial, and by any means.
In jurisdictions that recognize copyright laws, the author or authors of this
software dedicate any and all copyright interest in the software to the public
domain. We make this dedication for the benefit of the public at large and to
the detriment of our heirs and successors. We intend this dedication to be an
overt act of relinquishment in perpetuity of all present and future rights to
this software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
------------------------------------------------------------------------------
*/