  1. **********************************************************************
  2. Upgrading and Installing on Cluster Nodes
  3. Release Notes, Part 4 of 4
  4. Beta 2
  5. **********************************************************************
  6. (c) 2001 Microsoft Corporation. All rights reserved.
  7. These notes support a preliminary release of a software program that
  8. bears the project code name Whistler.
  9. With Whistler Advanced Server, you can use clustering to ensure that
  10. users have constant access to important server-based resources. With
  11. clustering, you create several cluster nodes that appear to users as
  12. one server. If one of the nodes in the cluster fails, another node
  13. begins to provide service (a process known as failover). Mission
  14. critical applications and resources remain continuously available.
  15. Sections to read if you are upgrading:
  16. 1.1 Preparing for Upgrading or Installing Clustering
  17. 1.2 Options for Upgrading or Installing Clustering
  18. 2.0 Upgrading a Cluster from Windows 2000 to Whistler
  19. 2.1 How Rolling Upgrades Work
  20. 2.2 Restrictions on Rolling Upgrades
  21. 2.3 Resource Behavior During Rolling Upgrades
  22. 2.4 Alternatives to Rolling Upgrades from Windows 2000
  23. 3.0 Upgrading Clusters from Windows NT Server 4.0,
  24. Enterprise Edition
  25. Sections to read if you are performing a new installation:
  26. 1.1 Preparing for Upgrading or Installing Clustering
  27. 1.2 Options for Upgrading or Installing Clustering
  28. 4.0 Installation on Cluster Nodes
  29. ======================================================================
  30. 1.0 Upgrading or Installing Clustering
  31. ======================================================================
  32. Before installing or upgrading clustering, you should familiarize
  33. yourself with the basic preparations needed and the options available
  34. for upgrading and installing. The following sections provide
  35. information on these topics.
  36. 1.1 Preparing for Upgrading or Installing Clustering
  37. ======================================================================
  38. To prepare for installing or upgrading clustering, review the
  39. sections earlier in this text file series. As described in those
  40. sections, check the Hardware Compatibility List to ensure that all
  41. your hardware (including your cluster storage) is compatible with
  42. Whistler Advanced Server. In addition, check with the manufacturer of
  43. your cluster storage hardware to be sure you have the drivers you need
  44. in order to use the hardware with Whistler Advanced Server.
  45. Important: If your cluster storage uses SCSI, you can have two nodes
  46. in the cluster, but no more. If you want to have more than two nodes
  47. in the cluster, you must use Fibre Channel for the cluster storage.
  48. 1.2 Options for Upgrading or Installing Clustering
  49. ======================================================================
  50. When installing or upgrading clustering, you can choose among several
  51. options. You can:
  52. * Upgrade the operating system on a cluster that is running
  53. Microsoft Windows NT Server version 4.0, Enterprise Edition. For
  54. a description of the ways you can do this, see "Upgrading
  55. Clusters from Windows NT Server 4.0, Enterprise Edition"
  56. later in this text file.
  57. * Upgrade a cluster that is running Windows 2000, possibly
  58. through a rolling upgrade. For more information, see "How
  59. Rolling Upgrades Work" and "Restrictions on Rolling Upgrades"
  60. later in this text file.
  61. * Perform a new installation of Whistler Advanced Server and install
  62. Cluster service at the same time. For important information
  63. about preparing for cluster installation, see "Installation on
  64. Cluster Nodes" later in this text file.
  65. Note: For cluster disks, you must use the NTFS file system and
  66. configure the disks as basic disks. You cannot configure cluster disks
  67. as dynamic disks, and you cannot use features of dynamic disks such as
  68. spanned volumes (volume sets). For more information about the
  69. limitations of server clusters, see Whistler Help and Support Services.
  70. To open Whistler Help and Support Services, after completing Setup,
  71. click Start, and then click Help and Support.
  72. For information about reinstalling clustering on one of the cluster
  73. nodes, see Whistler Help and Support Services.
  74. ======================================================================
  75. 2.0 Upgrading a Cluster from Windows 2000 to Whistler
  76. ======================================================================
  77. If you are upgrading from Windows 2000 to Whistler on cluster nodes,
  78. you might be able to perform a rolling upgrade of the operating
  79. system. In a rolling upgrade, you sequentially upgrade the operating
  80. system on each node, making sure that one node is always available to
  81. handle client requests. When you upgrade the operating system, the
  82. Cluster service is automatically upgraded also. A rolling upgrade
  83. maximizes availability of clustered services and minimizes
  84. administrative complexity. For more information, see the following
  85. section, "How Rolling Upgrades Work."
  86. To determine whether you can perform a rolling upgrade and understand
  87. the effect that a rolling upgrade might have on your clustered
  88. resources, see "Restrictions on Rolling Upgrades" later in this text
  89. file. For information about ways to upgrade your cluster nodes if you
  90. cannot perform a rolling upgrade, see "Alternatives to Rolling
  91. Upgrades from Windows 2000" later in this text file.
  92. If you are upgrading from Windows NT Server 4.0, Enterprise
  93. Edition to Whistler on cluster nodes, you cannot perform a rolling
  94. upgrade. For more information about how to perform an upgrade from
  95. Windows NT Server 4.0, Enterprise Edition, see "Upgrading Clusters
  96. From Windows NT Server 4.0, Enterprise Edition" later in this text
  97. file series.
  98. Important: If your cluster storage uses SCSI, you can have two nodes
  99. in the cluster, but no more. If you want to have more than two nodes
  100. in the cluster, you must use Fibre Channel for the cluster storage.
  101. 2.1 How Rolling Upgrades Work
  102. ======================================================================
  103. This section describes rolling upgrades on server clusters. For
  104. information about methods, restrictions, and alternatives to rolling
  105. upgrades, see the following sections.
  106. There are two major advantages to a rolling upgrade. First, there is
  107. a minimal interruption of service to clients. (However, server
  108. response time might decrease during the phases in which one node
  109. handles the work of the entire cluster.) Second, you do not have to
  110. recreate your cluster configuration. The configuration remains intact
  111. during the upgrade process.
  112. A rolling upgrade starts with two cluster nodes that are running
  113. Windows 2000. In this example, they are named Node 1 and Node 2.
  114. Phase 1: Preliminary
  115. Each node runs Windows 2000 Advanced Server with the following
  116. software:
  117. * The Cluster service component (one of the optional components of
  118. Windows 2000 Advanced Server).
  119. * Applications that support a rolling upgrade. For more information,
  120. see the product documentation and "Resource Behavior During
  121. Rolling Upgrades" later in this text file.
  122. At this point, your cluster is configured so that each node handles
  123. client requests (an active/active configuration).
  124. Phase 2: Upgrade Node 1
  125. Node 1 is paused, and Node 2 handles all cluster resource groups while
  126. you upgrade the operating system of Node 1 to Whistler Advanced
  127. Server.
  128. Phase 3: Upgrade Node 2
  129. Node 1 rejoins the cluster. Node 2 is paused and Node 1 handles all
  130. cluster resource groups while you upgrade the operating system on
  131. Node 2.
  132. Phase 4: Final
  133. Node 2 rejoins the cluster, and you redistribute the resource groups
  134. back to the active/active cluster configuration.
  135. Important: If your goal is to have more than two nodes in the cluster,
  136. you must use Fibre Channel (not SCSI) for the cluster storage. Add the
  137. third or fourth node after completing the rolling upgrade. For
  138. cluster disks, you must use the NTFS file system and configure the
  139. disks as basic disks. You cannot configure cluster disks as dynamic
  140. disks, and you cannot use features of dynamic disks such as spanned
  141. volumes (volume sets).
  142. 2.1.1 Performing a Rolling Upgrade
  143. ----------------------------------------------------------------------
  144. For an outline of the rolling upgrade process, see the preceding
  145. section, "How Rolling Upgrades Work."
  146. Important: For information about what resources are supported during
  147. rolling upgrades, see "Restrictions on Rolling Upgrades" and "Resource
  148. Behavior During Rolling Upgrades" later in this text file.
  149. >>> To perform a rolling upgrade:
  150. 1. In Cluster Administrator, click the node that you want to upgrade
  151. first.
  152. 2. On the File menu, click Pause Node.
  153. 3. In the right pane, double-click Active Groups.
  154. 4. In the right pane, click a group, and then on the File menu, click
  155. Move Group. Repeat this step for each group listed.
  156. The services will be interrupted during the time they are being
  157. moved and restarted on the other node. After the groups are
  158. moved, one node is idle, and the other node handles all client
  159. requests.
  160. 5. Use Whistler Advanced Server Setup to upgrade the paused node from
  161. Windows 2000. For information about running Setup, see sections
  162. earlier in this text file.
  163. Setup detects the earlier version of clustering on the paused node
  164. and automatically installs clustering for Whistler Advanced
  165. Server. The node automatically rejoins the cluster at the end of
  166. the upgrade process, but is still paused and does not handle any
  167. cluster-related work.
  168. 6. To verify that the node that was upgraded is fully functional,
  169. perform validation tests on it.
  170. 7. In Cluster Administrator, click the node that was paused, and then
  171. on the File menu, click Resume Node.
  172. 8. Repeat the preceding steps for any remaining node or nodes.
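The pause, move, and resume steps above can also be scripted with the
cluster.exe command-line tool. The following is a minimal sketch only, not
part of the documented procedure; the node names NODE1 and NODE2, the group
names, and the exact cluster.exe switches shown are illustrative assumptions,
so check them against the cluster.exe help on your system.

    # Sketch: pause a node, move its groups to the other node, and resume it
    # after the upgrade, by driving cluster.exe from Python. Node and group
    # names below are examples, not values taken from this text file.
    import subprocess

    def run(args):
        # Run one cluster.exe command and stop on failure.
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    node_to_upgrade = "NODE1"                    # hypothetical node name
    remaining_node = "NODE2"                     # hypothetical node name
    groups = ["Cluster Group", "Disk Group 1"]   # hypothetical group names

    run(["cluster", "node", node_to_upgrade, "/pause"])
    for group in groups:
        run(["cluster", "group", group, "/moveto:" + remaining_node])

    # ... run Whistler Advanced Server Setup on the paused node here ...

    run(["cluster", "node", node_to_upgrade, "/resume"])

Running the same commands one at a time at a command prompt, or using Cluster
Administrator as described above, works equally well; the script only mirrors
the order of the steps.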
  173. 2.2 Restrictions on Rolling Upgrades
  174. ======================================================================
  175. There are several basic restrictions to the rolling-upgrade process.
  176. The most basic restriction is as follows:
  177. * You can perform a rolling upgrade only if you are upgrading from
  178. Windows 2000 on the cluster nodes. You cannot perform a rolling
  179. upgrade if you are upgrading from Windows NT Server 4.0,
  180. Enterprise Edition. For a description of the ways to upgrade
  181. from Windows NT 4.0, see "Upgrading Clusters from Windows NT
  182. Server 4.0, Enterprise Edition" later in this text file
  183. series.
  184. The remaining restrictions involve the beginning of Phase 3, in which
  185. you operate a mixed-version cluster: a cluster in which the nodes run
  186. different versions of the operating system. For a mixed-version
  187. cluster to work, the different versions of the software running on
  188. each node must be prepared to communicate with one another. This
  189. requirement leads to several basic restrictions on the rolling-upgrade
  190. process.
  191. * For a successful rolling upgrade, every resource that the cluster
  192. manages must be capable of a rolling upgrade. For more
  193. information, see "Resource Behavior During Rolling Upgrades"
  194. later in this text file.
  195. * During the mixed-version phase of a rolling upgrade, when the
  196. cluster nodes are running different versions of the operating
  197. system, do not change the settings of resources (for example, do
  198. not change the settings of a printer resource).
  199. If the preceding restrictions cannot be met, do not perform a rolling
  200. upgrade. For more information, see "Alternatives to Rolling Upgrades
  201. from Windows 2000" later in this text file.
  202. 2.2.1 Operation of New Resource Types in Mixed-Version Clusters
  203. ----------------------------------------------------------------------
  204. If a resource type that you add to the cluster is supported in one
  205. version of the operating system but not in the other, the operation of
  206. a mixed-version cluster is complicated. For example, Cluster service
  207. in Whistler (part of the Advanced Server and Datacenter Server
  208. products) supports the Generic Script resource type. However, older
  209. versions of Cluster service do not support it. A mixed-version
  210. cluster can run a Generic Script resource on a node running Whistler
  211. but not on a node running Windows 2000.
  212. Cluster service transparently sets the possible owners of new
  213. resource types to prevent these resources from failing over to a
  214. Windows 2000 node of a mixed-version cluster. In other words, when you
  215. view the possible owners of a new resource type, a Windows 2000 node
  216. will not be in the list, and you will not be able to add this node to
  217. the list. If you create such a resource during the mixed-version phase
  218. of a rolling upgrade, the resource groups containing those resources
  219. will not fail over to a Windows 2000 node.
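To see this behavior for yourself, you can inspect the possible-owners list
of a resource from the command line. A minimal sketch, assuming cluster.exe
is available and using a hypothetical resource name:

    # Sketch: list the possible owners of a resource with cluster.exe.
    # "Script Resource" is a hypothetical Generic Script resource name.
    import subprocess

    result = subprocess.run(
        ["cluster", "resource", "Script Resource", "/listowners"],
        capture_output=True, text=True, check=True)
    print(result.stdout)   # a Windows 2000 node should not appear in this list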
  220. 2.3 Resource Behavior During Rolling Upgrades
  221. ======================================================================
  222. Although Cluster service supports rolling upgrades, not all
  223. applications have seamless rolling-upgrade behavior. The following
  224. table describes which resources will be supported during a rolling
  225. upgrade. If you have a resource that is not fully supported during
  226. rolling upgrades, see "Alternatives to Rolling Upgrades from
  227. Windows 2000" later in this text file.
  228. You cannot perform a rolling upgrade on a cluster running Windows NT.
  229. Only clusters running Windows 2000 support rolling upgrades to
  230. Whistler.
  231. RESOURCE        ROLLING UPGRADE NOTES
  232. --------------  ---------------------------------------------------
  233. DHCP            Supported during rolling upgrades.
  234. File Share      Supported during rolling upgrades.
  235. IP Address      Supported during rolling upgrades.
  236. Network Name    Supported during rolling upgrades.
  237. NNTP            Supported during rolling upgrades.
  238. Physical Disk   Supported during rolling upgrades.
  239. Time Service    Supported during rolling upgrades.
  240. SMTP            Supported during rolling upgrades.
  241. WINS            Supported during rolling upgrades.
  242. Print Spooler   The only Print Spooler resources supported
  243.                 during a rolling upgrade are those on LPR ports
  244.                 or standard monitor ports. See the following
  245.                 section, "Upgrades that Include a Print Spooler
  246.                 Resource."
  247. IIS             Internet Information Server (IIS) 6.0 is not
  248.                 supported during rolling upgrades. For more
  249.                 information, see "Upgrades that Include an IIS
  250.                 Resource" later in this text file.
  251. MS DTC          Microsoft Distributed Transaction
  252.                 Coordinator is not supported during a rolling
  253.                 upgrade. However, you can perform a process
  254.                 similar to rolling upgrades. See "Upgrades that
  255.                 Include an MS DTC Resource" later in this text
  256.                 file series.
  257. MSMQ            Microsoft Message Queuing is not supported
  258.                 during a rolling upgrade. To upgrade a cluster
  259.                 that includes MSMQ, see "Upgrades that Include
  260.                 an MSMQ Resource" later in this text file.
  261. Other resource  See Read1st.txt and Readme.doc in the root
  262. types           directory of the Whistler Advanced Server CD.
  263.                 Also see the product documentation that comes
  264.                 with the application or resource.
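Before you plan the upgrade, it can help to list the resources in your
cluster and compare them against the table above. A minimal sketch, assuming
cluster.exe is available on a cluster node:

    # Sketch: enumerate the cluster resources so each one can be checked
    # against the rolling-upgrade table above.
    import subprocess

    listing = subprocess.run(["cluster", "resource"],
                             capture_output=True, text=True, check=True)
    print(listing.stdout)   # shows each resource with its group, node, and status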
  265. 2.3.1 Upgrades that Include a Print Spooler Resource
  266. ----------------------------------------------------------------------
  267. If you want to perform a rolling upgrade of a cluster that has a
  268. Print Spooler resource, you must consider two issues.
  269. First, the Print Spooler resource only supports upgrades (including
  270. rolling upgrades or any other kind of upgrade) on printers on
  271. cluster-supported ports (LPR or Standard Monitor ports). For
  272. information about what to do if your printer is not supported, see
  273. "Alternatives to Rolling Upgrades from Windows 2000" later in this
  274. text file.
  275. Second, when you operate a mixed-version cluster including a Print
  276. Spooler resource, note the following:
  277. * Do not change printer settings in a mixed-version cluster with a
  278. Print Spooler resource.
  279. * If you add a new printer, when you install the drivers for that
  280. printer, be sure to install both the driver for Windows 2000 and
  281. the driver for Whistler on all nodes.
  282. * If printing preferences or defaults are important, be sure to
  283. check them. Printing preferences in Whistler won't necessarily
  284. correspond to document defaults for the same printer in Windows
  285. 2000. This can be affected by differences in the drivers for the
  286. two operating systems.
  287. When the rolling upgrade is complete and both cluster nodes are
  288. running the updated operating system, you can make any modifications
  289. you choose to your printer configuration.
  290. 2.4 Alternatives to Rolling Upgrades from Windows 2000
  291. ======================================================================
  292. Certain resources are not supported during rolling upgrades,
  293. including:
  294. * Internet Information Server (IIS)
  295. * Microsoft Distributed Transaction Coordinator (MS DTC)
  296. * Microsoft Message Queuing (MSMQ)
  297. Special procedures, described below, must be followed when performing
  298. an upgrade of a cluster that contains these resources. In addition to
  299. the three resource types above, you might also have other resources
  300. that are not supported during rolling upgrades. Be sure to read
  301. Read1st.txt and Readme.doc in the root directory of the Whistler CD,
  302. as well as the product documentation that comes with the application
  303. or resource.
  304. Note: You also cannot perform a rolling upgrade when upgrading from
  305. Windows NT Server 4.0, Enterprise Edition. For more information, see
  306. "Upgrading Clusters from Windows NT Server 4.0, Enterprise Edition"
  307. later in this text file.
  308. 2.4.1 Upgrades that Include an IIS Resource
  309. ----------------------------------------------------------------------
  310. IIS 6.0 is not supported during rolling upgrades. With earlier
  311. versions of IIS, you could configure an individual Web site to fail
  312. over as a cluster resource. However, with IIS 6.0, the entire IIS
  313. service must fail over, not individual Web sites. If you have
  314. individual Web sites or the IIS service configured as a cluster
  315. resource, you must use the following procedure to upgrade to Whistler.
  316. >>> To upgrade from Windows 2000 on a cluster that includes an IIS resource:
  317. 1. Remove any individual Web sites that you have configured as
  318. cluster resources from their cluster group. You can no longer
  319. designate a single site as a cluster resource.
  320. 2. If you have the IIS service configured as a cluster resource, take
  321. this resource offline. To take the resource offline, follow the
  322. procedures described in "Upgrades for Other Non-Supported
  323. Resources" later in this text file.
  324. 3. Perform a rolling upgrade, as described in the procedure "To
  325. perform a rolling upgrade" earlier in this text file.
  326. 4. Once you have completed the upgrade, you can bring the IIS service
  327. back online.
  328. Important: With IIS 6.0, you can only configure the IIS service as a
  329. cluster resource. You cannot configure individual Web sites as cluster
  330. resources.
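If the IIS service is configured as a cluster resource, steps 2 and 4 of the
procedure above can also be carried out from the command line. This is a
minimal sketch; the resource name is a placeholder, so use the name shown in
Cluster Administrator:

    # Sketch: take an IIS cluster resource offline before the rolling upgrade
    # and bring it back online afterward, using cluster.exe.
    import subprocess

    iis_resource = "IIS Server Instance"   # hypothetical resource name

    subprocess.run(["cluster", "resource", iis_resource, "/offline"], check=True)

    # ... perform the rolling upgrade described earlier in this text file ...

    subprocess.run(["cluster", "resource", iis_resource, "/online"], check=True)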
  331. 2.4.2 Upgrades that Include an MS DTC Resource
  332. ----------------------------------------------------------------------
  333. Microsoft Distributed Transaction Coordinator (MS DTC) is not
  334. supported during rolling upgrades. However, you can perform a process
  335. similar to a rolling upgrade.
  336. >>> To upgrade from Windows 2000 on a cluster that includes an MS DTC
  337. resource:
  338. 1. Take the MS DTC resource offline by using the Cluster Administrator
  339. and clicking the Resources folder. In the details pane, click the
  340. MS DTC resource, then on the File menu, click Take Offline.
  341. Caution: Taking a resource offline causes all resources that depend
  342. on that resource to be taken offline.
  343. 2. Configure the MS DTC resource so that the only allowable owner
  344. is the node it is currently on by using the Cluster
  345. Administrator and clicking the Resources folder. In the details
  346. pane, click the MS DTC resource. On the File menu, click
  347. Properties. On the General tab, next to Possible owners, click
  348. Modify. Specify Node 2 as an Available node, and if necessary,
  349. remove Node 1 from the Available nodes list.
  350. 3. Upgrade a node that does not contain the MS DTC resource from
  351. Windows 2000 to Whistler. For general information about Setup,
  352. review the sections earlier in this text file series.
  353. 4. Move the MS DTC resource to the upgraded node, following the
  354. procedures as described in step 1.
  355. 5. Configure the MS DTC resource so that the only allowable owner
  356. is the upgraded node, following the procedures as described in
  357. step 2.
  358. 6. Upgrade the remaining nodes from Windows 2000 to Whistler.
  359. 7. Configure the allowable owners for the MS DTC resource as
  360. appropriate for your configuration.
  361. 8. Manually restart all dependent services, and then bring the MS DTC
  362. resource back online by using the Cluster Administrator
  363. and clicking the Resources folder. In the details pane, click
  364. the MS DTC resource, and then on the File menu, click Bring Online.
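The offline, owner-restriction, and move steps above can also be expressed as
cluster.exe commands. The following is only a sketch of the order of
operations; the resource name, group name, node names, and the /addowner,
/removeowner, and /moveto switches are assumptions used for illustration and
should be checked against your cluster before use.

    # Sketch of the MS DTC sequence above using cluster.exe. All names are
    # examples; Node 2 is assumed to own the MS DTC resource at the start.
    import subprocess

    def run(args):
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    dtc = "MS DTC"                 # hypothetical resource name
    dtc_group = "MS DTC Group"     # hypothetical group containing MS DTC
    node1, node2 = "NODE1", "NODE2"

    run(["cluster", "resource", dtc, "/offline"])                # step 1
    run(["cluster", "resource", dtc, "/removeowner:" + node1])   # step 2

    # ... step 3: upgrade NODE1 from Windows 2000 to Whistler ...

    run(["cluster", "resource", dtc, "/addowner:" + node1])      # allow the upgraded node
    run(["cluster", "group", dtc_group, "/moveto:" + node1])     # step 4: move the group
    run(["cluster", "resource", dtc, "/removeowner:" + node2])   # step 5: upgraded node only

    # ... step 6: upgrade NODE2 from Windows 2000 to Whistler ...

    run(["cluster", "resource", dtc, "/addowner:" + node2])      # step 7: restore owners
    run(["cluster", "resource", dtc, "/online"])                 # step 8 (restart dependent
                                                                 # services manually first)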
  365. 2.4.3 Upgrades That Include an MSMQ Resource
  366. ----------------------------------------------------------------------
  367. Microsoft Message Queuing (MSMQ) does not support rolling upgrades.
  368. The MSMQ resource is dependent on the MS DTC resource, so be sure to
  369. follow the steps outlined in the preceding section "Upgrades that
  370. Include an MS DTC Resource."
  371. >>> To upgrade from Windows 2000 on a cluster that includes an MSMQ resource:
  372. 1. Upgrade the operating system of the nodes to Whistler.
  373. 2. Click Start, point to Programs, point to Administrative Tools, and
  374. then click Configure Your Server.
  375. 3. In Configure Your Server, click Finish Setup, and then click
  376. Configure Message Queuing Cluster Resources.
  377. 4. Follow the instructions that appear in the Configure Message
  378. Queuing Cluster Resources Wizard.
  379. 2.4.4 Upgrades for Other Non-Supported Resources
  380. ----------------------------------------------------------------------
  381. If you have other resources on your cluster that are not supported
  382. during a rolling upgrade, but are not described above, take those
  383. resources offline prior to performing the rolling upgrade.
  384. >>> To take a resource offline and perform a rolling upgrade:
  385. 1. Confirm that your systems are running Windows 2000.
  386. 2. Using the information in "Resource Behavior During Rolling
  387. Upgrades" earlier in this text file, list the resources
  388. in your cluster that are not supported during rolling upgrades.
  389. 3. In Cluster Administrator, click the Resources folder.
  390. 4. In the right pane, click the resource you want.
  391. 5. On the File menu, click Take Offline.
  392. 6. Repeat the preceding steps until you have taken offline all
  393. resources that do not support rolling upgrades.
  394. 7. Perform a rolling upgrade, as described in the procedure "To
  395. perform a rolling upgrade" earlier in this text file.
  396. 8. For each resource that you listed in step 2, follow the
  397. product's instructions for installing or reconfiguring the
  398. application so that it will run with Whistler.
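If the list from step 2 is long, steps 3 through 6 can be scripted. A minimal
sketch, assuming cluster.exe and hypothetical resource names:

    # Sketch: take every resource that does not support rolling upgrades
    # offline before the upgrade. The names below are placeholders for the
    # list you made in step 2.
    import subprocess

    unsupported = ["Resource A", "Resource B"]

    for name in unsupported:
        subprocess.run(["cluster", "resource", name, "/offline"], check=True)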
  399. ======================================================================
  400. 3.0 Upgrading Clusters from Windows NT Server 4.0, Enterprise
  401. Edition
  402. ======================================================================
  403. You cannot perform a rolling upgrade directly from Windows NT Server
  404. 4.0, Enterprise Edition to Whistler. You instead have two options. You
  405. can maintain cluster availability by performing an upgrade to Windows
  406. 2000 first, then to Whistler, or you can upgrade directly to Whistler.
  407. If you upgrade directly from Windows NT 4.0 to Whistler, you cannot
  408. maintain cluster availability.
  409. 3.1 Upgrading from Windows NT 4.0 while Maintaining Cluster
  410. Availability
  411. ======================================================================
  412. To maintain cluster availability when upgrading from Windows NT 4.0
  413. to Whistler, you must first upgrade to Windows 2000.
  414. >>> To perform an upgrade from Windows NT 4.0 while maintaining
  415. cluster availability:
  416. 1. Perform a rolling upgrade on one node from Windows NT 4.0 to
  417. Windows 2000 as documented in "To Perform a Rolling Upgrade" in the
  418. Windows 2000 documentation. However, do not repeat the process
  419. for the other nodes as documented in those instructions.
  420. Important: For step 1, be sure to follow the procedures in the Windows
  421. 2000 documentation, not the Whistler procedures, as the procedures are
  422. different for each version.
  423. 2. Perform an upgrade on all other nodes from Windows NT 4.0 to
  424. Whistler. For more information, see "Performing a Rolling
  425. Upgrade" earlier in this text file. Follow the instructions,
  426. upgrading only Node 2, not Node 1. For general information about
  427. Setup, review the sections earlier in this text file series.
  428. 3. Perform an upgrade on the Windows 2000 node from Windows 2000 to
  429. Whistler.
  430. 3.2 Upgrading from Windows NT 4.0 While Not Maintaining Cluster
  431. Availability
  432. ======================================================================
  433. To upgrade from Windows NT 4.0 to Whistler without the intermediate
  434. step of upgrading to Windows 2000, you must interrupt cluster
  435. availability. The steps you perform to upgrade while not maintaining
  436. cluster availability depend on the hardware you are using for your
  437. cluster: either a Fibre Channel bus or a SCSI bus.
  438. >>> To upgrade directly from Windows NT 4.0 to Whistler when using a
  439. Fibre Channel bus:
  440. 1. As appropriate, notify users that you will be shutting down the
  441. applications they use on the cluster.
  442. 2. Stop the applications that are made available through the cluster.
  443. 3. To stop Cluster service on all nodes but one, in Cluster
  444. Administrator, click each node you want to stop, and then on the
  445. File menu, click Stop Cluster Service.
  446. 4. Shut down and turn off all nodes but one.
  447. Caution: Be sure that only one node is running before continuing. This
  448. prevents corruption of the cluster storage.
  449. 5. Upgrade the operating system on the node that is running. For
  450. general information about Setup, review the sections earlier in
  451. this text file series.
  452. 6. The cluster software will be upgraded automatically during the
  453. operating system upgrade. Note that you cannot make
  454. configuration changes such as configuring cluster disks as
  455. dynamic disks. For more information about the limitations of server
  456. clusters, see Whistler Help and Support Services.
  457. 7. On the node that is running, click Start, point to Programs, point
  458. to Administrative Tools, and then click Cluster Administrator.
  459. 8. Check to see that the cluster disks are online in Cluster
  460. Administrator.
  461. Caution: Be sure that the cluster disks are online in Cluster
  462. Administrator before continuing to the next step. When the disks are
  463. online, it means that Cluster service is working, which means that
  464. only one node can access the cluster storage at any given time. This
  465. prevents corruption of the cluster storage.
  466. 9. Turn on the other node in the cluster and upgrade the operating
  467. system on that node.
  468. The node automatically rejoins the existing cluster.
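The disk check in step 8 can also be made from a command prompt on the
upgraded node before you turn on the other node. A minimal sketch; "Disk Q:"
is a placeholder for whatever your cluster disk resources are named:

    # Sketch: verify that a cluster disk resource reports Online before the
    # second node is turned on. "Disk Q:" is a hypothetical resource name.
    import subprocess

    status = subprocess.run(["cluster", "resource", "Disk Q:", "/status"],
                            capture_output=True, text=True, check=True)
    print(status.stdout)
    if "Online" not in status.stdout:
        raise SystemExit("Cluster disk is not online; do not start the other node.")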
  469. >>> To upgrade directly from Windows NT 4.0 to Whistler when using a
  470. SCSI bus:
  471. 1. Review the appropriate instructions for making sure that the SCSI
  472. bus is terminated or for putting Y-cables or TriLink cables in
  473. place. These instructions are in Cluster Administrator Help in
  474. Windows NT Server 4.0, Enterprise Edition, in the Index under
  475. "nodes, disconnecting." If you have used an alternative set of
  476. instructions from the Windows NT Server 4.0, Enterprise Edition
  477. CD, in \Support\Books\Mscsadm5.doc, review these instructions. You
  478. will carry out the instructions in a later step.
  479. 2. As appropriate, notify users that you will be shutting down the
  480. applications they use on the cluster.
  481. 3. Stop the applications that are made available through the cluster.
  482. 4. To stop Cluster service on all nodes but one, in Cluster
  483. Administrator, click each node you want to stop, and then on the
  484. File menu, click Stop Cluster Service.
  485. 5. On Node 1, follow the appropriate instructions to make sure the
  486. SCSI bus is terminated, or that Y-cables or TriLink cables are
  487. in place.
  488. 6. Shut down and turn off all nodes but one, or bring them to a
  489. shut-down state appropriate to your method of termination.
  490. Caution: Be sure that only one node is running before continuing. This
  491. prevents corruption of the cluster storage.
  492. 7. Upgrade the operating system on the node that is running. For
  493. general information about Setup, review the sections earlier in
  494. this text file series.
  495. 8. The cluster software will be upgraded automatically during the
  496. operating system upgrade. Note that you cannot make configuration
  497. changes such as configuring cluster disks as dynamic disks. For
  498. more information about the limitations of server clusters, see
  499. Whistler Help and Support Services.
  500. 9. On the node that is running, click Start, point to Programs, point
  501. to Administrative Tools, and then click Cluster Administrator.
  502. 10. Check to see that the cluster disks are online in Cluster
  503. Administrator.
  504. Caution: Be sure that the cluster disks are online in Cluster
  505. Administrator before continuing to the next step. When the disks are
  506. online, it means that Cluster service is working, which means that
  507. only one node can access the cluster storage at any given time. This
  508. prevents corruption of the cluster storage.
  509. 11. Turn on the other node in the cluster and upgrade the operating
  510. system on that node.
  511. The node automatically rejoins the existing cluster.
  512. Important: If your cluster storage uses SCSI, you can have two nodes
  513. in the cluster, but no more. If you want to have more than two nodes
  514. in the cluster, you must use Fibre Channel for the cluster storage.
  515. ======================================================================
  516. 4.0 Installation on Cluster Nodes
  517. ======================================================================
  518. The following sections provide important information about how to
  519. prepare for cluster installation, begin hardware installation for a
  520. cluster, and start Setup on the first cluster node.
  521. 4.1 Planning and Preparing for Cluster Installation
  522. ======================================================================
  523. Before carrying out cluster installation, you will need to plan
  524. hardware and network details.
  525. Caution: Make sure that Advanced Server and Cluster service are
  526. installed and running on one node before starting the operating system
  527. on another node. If the operating system is started on multiple nodes
  528. before Cluster service is running on one node, the cluster storage
  529. could be corrupted. Once Cluster service is running properly on one
  530. node, the other nodes can be installed and configured simultaneously.
  531. Each node of your cluster must be running Advanced Server.
  532. In your planning, review the following items:
  533. * Cluster hardware and drivers.
  534. Check that your hardware, including your cluster storage and other
  535. cluster hardware, is compatible with Whistler Advanced Server. To
  536. check this, see the Hardware Compatibility List (HCL) on the
  537. Whistler CD, in the Support folder, in Hcl.txt. For the most
  538. up-to-date list of supported hardware, see the Hardware
  539. Compatibility List by visiting the Microsoft Web site at:
  540. http://www.microsoft.com/
  541. You must have a separate PCI storage host adapter (SCSI or Fibre
  542. Channel) for the shared disks. This is in addition to the boot disk
  543. adapter.
  544. Also check that you have the drivers you need in order to use the
  545. cluster storage hardware with Whistler Advanced Server. (Drivers
  546. are available from your hardware manufacturer.)
  547. Review the manufacturer's instructions carefully before you begin
  548. installing cluster hardware. Otherwise, the cluster storage could
  549. be corrupted. If your cluster hardware includes a SCSI bus, be sure
  550. to review carefully any instructions about termination of the SCSI
  551. bus and configuration of SCSI IDs.
  552. To simplify configuration and eliminate potential compatibility
  553. problems, consider using identical hardware for all nodes.
  554. * Network adapters on the cluster nodes.
  555. In your planning, decide what kind of communication each network
  556. adapter will carry.
  557. Note: To reduce the risk of having a single point of failure,
  558. plan on having two or more network adapters in each cluster node,
  559. and on connecting each adapter to a physically separate network. The
  560. adapters on a given node must connect to networks using different
  561. subnet masks.
  562. The following table shows recommended ways of connecting network
  563. adapters:
  564. ADAPTERS
  565. PER NODE  RECOMMENDED USE
  566. --------  -----------------------------------------------------------
  567. 2         One private network (node-to-node only), plus
  568.           one mixed network (node-to-node plus client-to-cluster).
  569. 3         Two private networks (node-to-node), plus
  570.           one public network (client-to-cluster).
  571.           With this configuration, the adapters using the private
  572.           networks must use static IP addresses (not DHCP).
  573.           or
  574.           One private network (node-to-node), plus
  575.           one public network (client-to-cluster), plus
  576.           one mixed network (node-to-node plus client-to-cluster).
  577. The following list provides more details about the types of
  578. communication that an adapter can carry:
  579. * Only node-to-node communication (private network).
  580. This implies that the server has one or more additional adapters to
  581. carry other communication.
  582. For node-to-node communication, you will connect the network
  583. adapter to a private network used exclusively within the
  584. cluster. Note that if the private network uses a single hub or
  585. network switch, that piece of equipment becomes a potential
  586. point of failure in your cluster.
  587. The nodes of a cluster must be on the same subnet, but you can use
  588. virtual LAN (VLAN) switches on the interconnects between two
  589. nodes. If you use a VLAN, the point-to-point, round-trip latency
  590. must be less than 1/2 second and the link between two nodes must
  591. appear as a single point-to-point connection from the perspective
  592. of the operating system. To avoid single points of failure, use
  593. independent VLAN hardware for the different paths between the
  594. nodes.
  595. If your nodes use multiple private (node-to-node) networks, the
  596. adapters for those networks must use static IP addresses (not
  597. DHCP).
  598. * Only client-to-cluster communication (public network).
  599. This implies that the server has one or more additional adapters to
  600. carry other communication.
  601. * Both node-to-node and client-to-cluster communication (mixed
  602. network).
  603. If you have only one network adapter per node, it must
  604. carry both these kinds of communication. If you have multiple
  605. network adapters per node, a network adapter that carries both
  606. kinds of communication can provide backup for other network
  607. adapters.
  608. * Communication unrelated to the cluster.
  609. If a clustered node also provides services unrelated to the
  610. cluster, and there are enough adapters in the cluster node, you
  611. might want to use one adapter for carrying communication unrelated
  612. to the cluster.
  613. Consider choosing a name for each connection that describes its
  614. purpose. The name will make it easier to identify the connection
  615. whenever you are configuring the server.
  616. * Cluster IP address.
  617. Obtain a static IP address for the cluster itself. You cannot use
  618. DHCP for this address.
  619. * IP addressing for cluster nodes.
  620. Determine how to handle the IP addressing for the cluster nodes.
  621. Each network adapter on each node will need IP addressing. You
  622. can provide IP addressing through DHCP, or you can assign each
  623. network adapter a static IP address. If you use static IP
  624. addresses, the addresses for each linked pair of network adapters
  625. (linked node-to-node) should be on the same subnet.
  626. Note: If you use DHCP for the cluster nodes, it can act as a
  627. single point of failure. That is, if you set up your cluster nodes
  628. so that they depend on a DHCP server for their IP addresses,
  629. temporary failure of the DHCP server can mean temporary
  630. unavailability of the cluster nodes. When deciding whether to use
  631. DHCP, evaluate ways to ensure availability of DHCP services, and
  632. consider the possibility of using long leases for the cluster
  633. nodes. This will help ensure that they always have a valid IP
  634. address.
  635. * Cluster name.
  636. Determine or obtain an appropriate name for the cluster. This
  637. is the name administrators will use for connections to the cluster.
  638. (The actual applications running on the cluster will typically have
  639. different network names.) The cluster name must be different from
  640. the domain name, from all computer names on the domain, and from
  641. other cluster names on the domain.
  642. * Computer accounts and domain assignment for cluster nodes.
  643. Make sure that the cluster nodes all have computer accounts in
  644. the same domain. Cluster nodes cannot be in a workgroup.
  645. * Operator user account for installing and configuring the Cluster
  646. service.
  647. To install and configure Cluster service, you must log on to
  648. each node with an account that has administrative privileges on
  649. those nodes.
  650. * Cluster service user account.
  651. Create or obtain the Cluster service user account. This is the
  652. name and password under which Cluster service will run. You
  653. will need to supply this user name and password during cluster
  654. installation.
  655. The Cluster service user account should be a new account. The
  656. account must be a domain account; it cannot be a local account. The
  657. account also must have local administrative privileges on all of
  658. the cluster nodes. Be sure to keep the password from expiring on
  659. the account (follow your organization's policies for password
  660. renewal). A sketch for creating this account follows this list.
  661. * Volume for important cluster configuration information (checkpoint
  662. and log files).
  663. You need to plan to set aside a volume on your cluster storage
  664. for holding important cluster configuration information. This
  665. information makes up the quorum resource of the cluster, needed
  666. when a cluster node stops functioning. The quorum resource provides
  667. node-independent storage of crucial data needed by the cluster.
  668. The recommended minimum size for the volume is 500 MB. You should use a different volume for the quorum resource than you use for user data.
  669. * List of storage devices or disks attached to the first server on
  670. which you will install clustering.
  671. Unless the first server on which you will install clustering has
  672. relatively few storage devices or disks attached to it, you should
  673. make a list that identifies the ones intended for cluster storage.
  674. This makes it easy to choose storage devices or disks correctly
  675. during cluster configuration.
  676. Note: When planning and carrying out disk configuration for the
  677. cluster disks, configure them as basic disks with all partitions
  678. formatted as NTFS. Do not configure them as dynamic disks, and do
  679. not use Encrypting File System, volume mount points, spanned
  680. volumes (volume sets), or Remote Storage on the cluster disks.
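The following is a minimal command-line sketch for the Cluster service user
account item in the list above. The account name, password, and domain are
placeholders, and the commands assume you have the rights to create domain
accounts and to change the local Administrators group on each node; follow
your organization's account and password policies rather than this sketch.

    # Sketch: create a domain account for the Cluster service and give it
    # local administrative rights on a cluster node. All names are examples.
    import subprocess

    account = "ClusterSvc"             # hypothetical account name
    password = "ReplaceWithStrongPw"   # placeholder; never hard-code real passwords
    domain = "EXAMPLEDOM"              # hypothetical domain name

    # Create the domain account.
    subprocess.run(["net", "user", account, password, "/add", "/domain"],
                   check=True)

    # On each cluster node, add the account to the local Administrators group.
    subprocess.run(["net", "localgroup", "Administrators",
                    domain + "\\" + account, "/add"], check=True)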
  681. The following section describes the physical installation of the
  682. cluster storage.
  683. 4.2 Beginning the Installation of the Cluster Hardware
  684. ======================================================================
  685. The steps you carry out when first physically connecting and
  686. installing the cluster hardware are crucial. Be sure to follow the
  687. hardware manufacturer's instructions for these initial steps.
  688. Important: Carefully review your network cables after connecting them.
  689. Make sure no cables are crossed by mistake (for example, private
  690. network connected to public).
  691. Caution: When you first attach your cluster hardware (the shared bus
  692. and cluster storage), be sure to work only from the firmware
  693. configuration screens on the cluster nodes (a node is a server in a
  694. cluster). On a 32-bit computer, use the BIOS configuration screens. On
  695. a 64-bit computer, use the Extensible Firmware Interface (EFI)
  696. configuration screens. The instructions from your manufacturer will
  697. describe whether these configuration screens are displayed
  698. automatically or whether you must, after turning on the computer,
  699. press specific keys to open them. Follow the manufacturer's
  700. instructions for completing the BIOS or EFI configuration process.
  701. Remain in the BIOS or EFI, and do not allow the operating system to
  702. start during this initial installation phase.
  703. 4.2.1 Steps to Carry Out in the BIOS or EFI
  704. ----------------------------------------------------------------------
  705. Complete the following steps while the cluster nodes are still
  706. displaying BIOS or EFI configuration screens before starting the
  707. operating system on the first cluster node.
  708. * If you have a SCSI bus, make sure you understand and follow the
  709. manufacturer's instructions for termination of the SCSI bus.
  710. * If you have a SCSI bus, make sure that each device on the shared
  711. bus (both SCSI controllers and hard disks) has a unique SCSI ID.
  712. If the SCSI controllers all default to the same ID (often it is
  713. SCSI ID 7), change one controller to a different SCSI ID such
  714. as SCSI ID 6. If there is more than one disk that will be on the
  715. shared SCSI bus, each disk must also have a unique SCSI ID. In
  716. addition, make sure that the bus is not configured to reset SCSI
  717. IDs automatically during startup (otherwise the IDs will change
  718. from the settings you specify).
  719. * Ensure that you can scan the bus and see the drives from both
  720. cluster nodes (while remaining in the BIOS or EFI configuration
  721. screens).
  722. 4.3 Completing the Installation
  723. ======================================================================
  724. After the BIOS or EFI configuration is completed, start the operating
  725. system on one cluster node only, and carry out the installation of
  726. Cluster service. Before starting the operating system on another node,
  727. make sure that Whistler Advanced Server and Cluster service are
  728. installed and running on the first node. If the operating system is started
  729. on multiple nodes before Cluster service is running on one node, the
  730. cluster storage could be corrupted.
  731. 4.4 Installation on the First Cluster Node
  732. ======================================================================
  733. It is important that you work on one node (never two nodes) when you
  734. exit the BIOS or EFI configuration screens and allow the operating
  735. system to start for the first time.
  736. Caution: Make sure that Whistler Advanced Server and Cluster service
  737. are installed and running on one node before starting the operating
  738. system on another node. If the operating system is started on multiple
  739. nodes before Cluster service is running on one node, the cluster
  740. storage could be corrupted.
  741. 4.4.1 Completing the Installation on the First Cluster Node
  742. ----------------------------------------------------------------------
  743. If you have not already installed Whistler Advanced Server on the
  744. first cluster node, install it now. For information about decisions
  745. you must make, such as decisions about licensing and about the
  746. components to install, see the sections earlier in this text file
  747. series.
  748. When Whistler Advanced Server is installed, use the following
  749. procedure to obtain specific information about how to complete the
  750. installation of the cluster.
  751. >>> To obtain additional information about how to install and
  752. configure Cluster service:
  753. 1. With Whistler Advanced Server running on one cluster node, click
  754. Start, and then click Help and Support.
  755. 2. Click Enterprise Technologies, and then click Windows Clustering.
  756. 3. Click Server Clusters.
  757. 4. Click Checklists: Creating Server Clusters, and then
  758. click Checklist: Creating a server cluster.
  759. 5. Use the checklist to guide you through the process of completing
  760. the installation of your server cluster.