
**********************************************************************
Upgrading and Installing on Cluster Nodes
Release Notes, Part 4 of 4
Beta 2
**********************************************************************
(c) 2001 Microsoft Corporation. All rights reserved.

These notes support a preliminary release of a software program that
bears the project code name Whistler.

With Whistler Datacenter Server, you can use clustering to ensure
that users have constant access to important server-based resources.
With clustering, you create several cluster nodes that appear to
users as one server. If one of the nodes in the cluster fails, another
node begins to provide service (a process known as failover).
Mission-critical applications and resources remain continuously
available.

Sections to read if you are upgrading:

1.0 Upgrading or Installing Clustering
1.2 Options for Upgrading or Installing Clustering
2.0 Upgrading a Cluster from Windows 2000 to Whistler
2.1 How Rolling Upgrades Work
2.2 Restrictions on Rolling Upgrades
2.3 Resource Behavior During Rolling Upgrades
2.4 Alternatives to Rolling Upgrades from Windows 2000

Sections to read if you are performing a new installation:

1.0 Upgrading or Installing Clustering
1.2 Options for Upgrading or Installing Clustering
3.0 Installation on Cluster Nodes

======================================================================
1.0 Upgrading or Installing Clustering
======================================================================

Before installing or upgrading clustering, you should familiarize
yourself with the basic preparations needed and the options available
for upgrading and installing. The following sections provide
information on these topics.

1.1 Preparing for Upgrading or Installing Clustering
======================================================================

To prepare for installing or upgrading clustering, review the
sections earlier in this text file series. As described in those
sections, check the Hardware Compatibility List to ensure that all
your hardware (including your cluster storage) is compatible with
Whistler Datacenter Server. In addition, check with the manufacturer
of your cluster storage hardware to be sure you have the drivers you
need in order to use the hardware with Whistler Datacenter Server.

1.2 Options for Upgrading or Installing Clustering
======================================================================

When installing or upgrading clustering, you can choose among
several options. You can:

* Upgrade a cluster that is running Windows 2000, possibly
  through a rolling upgrade. For more information, see "How
  Rolling Upgrades Work" and "Restrictions on Rolling Upgrades"
  later in this text file.

* Perform a new installation of Whistler Datacenter Server and
  install Cluster service at the same time. For important
  information about preparing for cluster installation,
  see "Installation on Cluster Nodes" later in this text file.

Note: For cluster disks, you must use the NTFS file system and
configure the disks as basic disks. You cannot configure cluster disks
as dynamic disks, and you cannot use features of dynamic disks such as
spanned volumes (volume sets). For more information about the
limitations of server clusters, see Whistler Help and Support
Services. To open Help and Support Services after completing Setup,
click Start, and then click Help.

For more information about reinstalling clustering on one of the
cluster nodes, see Whistler Help and Support Services.

======================================================================
2.0 Upgrading a Cluster from Windows 2000 to Whistler
======================================================================

If you are upgrading from Windows 2000 to Whistler on cluster nodes,
you might be able to perform a rolling upgrade of the operating
system. In a rolling upgrade, you sequentially upgrade the operating
system on each node, making sure that one node is always available to
handle client requests. When you upgrade the operating system, the
Cluster service is automatically upgraded as well. A rolling upgrade
maximizes availability of clustered services and minimizes
administrative complexity. For more information, see the following
section, "How Rolling Upgrades Work."

To determine whether you can perform a rolling upgrade and to
understand the effect that a rolling upgrade might have on your
clustered resources, see "Restrictions on Rolling Upgrades" later in
this text file series. For information about ways to upgrade your
cluster nodes if you cannot perform a rolling upgrade, see
"Alternatives to Rolling Upgrades from Windows 2000" later in this
text file series.

2.1 How Rolling Upgrades Work
======================================================================

This section describes rolling upgrades on server clusters. For
information about methods, restrictions, and alternatives to rolling
upgrades, see the following sections.

There are two major advantages to a rolling upgrade. First, there is
a minimal interruption of service to clients. (However, server
response time might decrease during the phases in which one node
handles the work of the entire cluster.) Second, you do not have to
re-create your cluster configuration. The configuration remains
intact during the upgrade process.

A rolling upgrade starts with two cluster nodes that are running
Windows 2000. In this example, they are named Node 1 and Node 2.

Phase 1: Preliminary

Each node runs Windows 2000 Datacenter Server with the following
hardware and software:

* A cluster storage unit using Fibre Channel, not SCSI. Fibre
  Channel is the only type of cluster storage on the Hardware
  Compatibility List for Datacenter Server. (Note that SCSI can be
  used for a two-node cluster with Advanced Server, not Datacenter
  Server.)

* The Cluster service component (one of the optional components of
  Windows 2000 Datacenter Server).

* Applications that support a rolling upgrade. For more information,
  see the product documentation and "Resource Behavior During
  Rolling Upgrades" later in this text file.

At this point, your cluster is configured so that each node handles
client requests (an active/active configuration).

Phase 2: Upgrade Node 1

Node 1 is paused, and Node 2 handles all cluster resource groups while
you upgrade the operating system of Node 1 to Whistler Datacenter
Server.

Phase 3: Upgrade Node 2

Node 1 rejoins the cluster. Node 2 is paused, and Node 1 handles all
cluster resource groups while you upgrade the operating system on
Node 2.

Phase 4: Final

Node 2 rejoins the cluster, and you redistribute the resource groups
back to the active/active cluster configuration.

Important: For cluster disks, you must use the NTFS file system and
configure the disks as basic disks. You cannot configure cluster disks
as dynamic disks, and you cannot use features of dynamic disks such as
spanned volumes (volume sets).

2.1.1 Performing a Rolling Upgrade
----------------------------------------------------------------------

For an outline of the rolling upgrade process, see the preceding
section, "How Rolling Upgrades Work."

Important: For information about what resources are supported during
rolling upgrades, see "Restrictions on Rolling Upgrades" and "Resource
Behavior During Rolling Upgrades" later in this text file series.

>>> To perform a rolling upgrade:

1. In Cluster Administrator, click the node that you want to upgrade
   first.

2. On the File menu, click Pause Node.

3. In the right pane, double-click Active Groups.

4. In the right pane, click a group, and then on the File menu, click
   Move Group. Repeat this step for each group listed.

   Services are interrupted while they are moved and restarted on
   another node. After the groups are moved, one node is idle, and
   the other nodes handle all client requests.

5. Use Whistler Datacenter Server Setup to upgrade the paused node
   from Windows 2000. For information about running Setup, see the
   sections earlier in this text file series.

   Setup detects the earlier version of clustering on the paused node
   and automatically installs clustering for Whistler Datacenter
   Server. The node automatically rejoins the cluster at the end of
   the upgrade process, but it is still paused and does not handle
   any cluster-related work.

6. To verify that the node that was upgraded is fully functional,
   perform validation tests on it.

7. In Cluster Administrator, click the node that was paused, and then
   on the File menu, click Resume Node.

8. Repeat the preceding steps for any remaining node or nodes.
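
Note for scripted deployments: the pause, move, and resume steps
above can also be driven through the Cluster API (Clusapi.h). The
following C fragment is a minimal sketch only, not part of the
product; the node name "Node1" and group name "Disk Group 1" are
placeholders for your own configuration, and error handling is
reduced to a minimum.

    /* Sketch: pause a node, move a group off it, and resume the node
       by using the Cluster API. */
    #include <windows.h>
    #include <clusapi.h>
    #pragma comment(lib, "clusapi.lib")

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);   /* local cluster */
        HNODE    hNode    = OpenClusterNode(hCluster, L"Node1");
        HGROUP   hGroup   = OpenClusterGroup(hCluster, L"Disk Group 1");

        if (hCluster && hNode && hGroup)
        {
            PauseClusterNode(hNode);             /* step 2 */
            /* A NULL destination lets the cluster choose the best
               remaining node (repeat for each group, as in step 4). */
            MoveClusterGroup(hGroup, NULL);
            /* ... upgrade the paused node and validate it ... */
            ResumeClusterNode(hNode);            /* step 7 */
        }

        if (hGroup)   CloseClusterGroup(hGroup);
        if (hNode)    CloseClusterNode(hNode);
        if (hCluster) CloseCluster(hCluster);
        return 0;
    }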

2.2 Restrictions on Rolling Upgrades
======================================================================

The basic restrictions on the rolling-upgrade process involve the
beginning of Phase 3, in which you operate a mixed-version cluster:
a cluster in which the nodes run different versions of the operating
system. For a mixed-version cluster to work, the different versions
of the software running on each node must be prepared to communicate
with one another. This requirement leads to the following
restrictions:

* For a successful rolling upgrade, every resource that the cluster
  manages must be capable of a rolling upgrade. For more
  information, see "Resource Behavior During Rolling Upgrades"
  later in this text file.

* During the mixed-version phase of a rolling upgrade, when the
  cluster nodes are running different versions of the operating
  system, do not change the settings of resources (for example, do
  not change the settings of a printer resource).

If the preceding restrictions cannot be met, do not perform a rolling
upgrade. For more information, see "Alternatives to Rolling Upgrades
from Windows 2000" later in this text file.

2.2.1 Operation of New Resource Types in Mixed-Version Clusters
----------------------------------------------------------------------

If a resource type that you add to the cluster is supported in one
version of the operating system but not in the other, the operation of
a mixed-version cluster is complicated. For example, Cluster service
in Whistler (part of the Advanced Server and Datacenter Server
products) supports the Generic Script resource type. However, older
versions of Cluster service do not support it. A mixed-version
cluster can run a Generic Script resource on a node running Whistler
but not on a node running Windows 2000.

Cluster service transparently sets the possible owners of new resource
types to prevent these resources from failing over to a Windows 2000
node of a mixed-version cluster. In other words, when you view the
possible owners of a new resource type, a Windows 2000 node will not
be in the list, and you will not be able to add this node to the list.
If you create such a resource during the mixed-version phase of a
rolling upgrade, the resource groups containing those resources will
not fail over to a Windows 2000 node.
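
You can verify this behavior by listing the possible owner nodes of a
resource. The following C fragment is an illustrative sketch only;
the resource name "Script Resource" is a placeholder.

    /* Sketch: list the possible owner nodes of one cluster resource. */
    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>
    #pragma comment(lib, "clusapi.lib")

    int main(void)
    {
        HCLUSTER  hCluster = OpenCluster(NULL);
        HRESOURCE hRes     = OpenClusterResource(hCluster,
                                                 L"Script Resource");
        HRESENUM  hEnum    = ClusterResourceOpenEnum(
                                 hRes, CLUSTER_RESOURCE_ENUM_NODES);
        WCHAR name[MAX_PATH];
        DWORD i, type, cch;

        for (i = 0; hEnum != NULL; i++)
        {
            cch = MAX_PATH;
            if (ClusterResourceEnum(hEnum, i, &type, name, &cch)
                    != ERROR_SUCCESS)
                break;              /* ERROR_NO_MORE_ITEMS ends the loop */
            wprintf(L"Possible owner: %s\n", name);
        }

        if (hEnum)    ClusterResourceCloseEnum(hEnum);
        if (hRes)     CloseClusterResource(hRes);
        if (hCluster) CloseCluster(hCluster);
        return 0;
    }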

2.3 Resource Behavior During Rolling Upgrades
======================================================================

Although Cluster service supports rolling upgrades, not all
applications have seamless rolling-upgrade behavior. The following
table describes which resources are supported during a rolling
upgrade. If you have a resource that is not fully supported during
rolling upgrades, see "Alternatives to Rolling Upgrades from
Windows 2000" later in this text file.

RESOURCE        ROLLING UPGRADE NOTES
--------------  ------------------------------------------------
DHCP            Supported during rolling upgrades.
File Share      Supported during rolling upgrades.
IP Address      Supported during rolling upgrades.
Network Name    Supported during rolling upgrades.
NNTP            Supported during rolling upgrades.
Physical Disk   Supported during rolling upgrades.
Time Service    Supported during rolling upgrades.
SMTP            Supported during rolling upgrades.
WINS            Supported during rolling upgrades.
Print Spooler   The only Print Spooler resources supported
                during a rolling upgrade are those on LPR ports
                or standard monitor ports. See the following
                section, "Upgrades that Include a Print Spooler
                Resource."
IIS             Internet Information Server (IIS) 6.0 is not
                supported during rolling upgrades. For more
                information, see "Upgrades that Include an IIS
                Resource" later in this text file.
MS DTC          Microsoft Distributed Transaction Coordinator
                is not supported during a rolling upgrade.
                However, you can perform a process similar to a
                rolling upgrade. See "Upgrades that Include an
                MS DTC Resource" later in this text file.
MSMQ            Microsoft Message Queuing is not supported
                during a rolling upgrade. To upgrade a cluster
                that includes MSMQ, see "Upgrades that Include
                an MSMQ Resource" later in this text file.
Other resource  See Readme.doc in the root directory of the
types           Whistler Datacenter Server CD. Also see the
                product documentation that comes with the
                application or resource.
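
To take an inventory before the upgrade, you can enumerate the
cluster resources and query the resource type of each one, and then
compare the types against the table above. The following C fragment
is a sketch only, with minimal error handling.

    /* Sketch: print every cluster resource and its resource type. */
    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>
    #pragma comment(lib, "clusapi.lib")

    int main(void)
    {
        HCLUSTER  hCluster = OpenCluster(NULL);
        HCLUSENUM hEnum    = ClusterOpenEnum(hCluster,
                                             CLUSTER_ENUM_RESOURCE);
        HRESOURCE hRes;
        WCHAR name[MAX_PATH], rtype[MAX_PATH];
        DWORD i, type, cch, cb;

        for (i = 0; hEnum != NULL; i++)
        {
            cch = MAX_PATH;
            if (ClusterEnum(hEnum, i, &type, name, &cch) != ERROR_SUCCESS)
                break;
            hRes = OpenClusterResource(hCluster, name);
            if (hRes == NULL)
                continue;
            /* CLUSCTL_RESOURCE_GET_RESOURCE_TYPE returns the type
               name as a null-terminated Unicode string. */
            if (ClusterResourceControl(hRes, NULL,
                    CLUSCTL_RESOURCE_GET_RESOURCE_TYPE,
                    NULL, 0, rtype, sizeof(rtype), &cb) == ERROR_SUCCESS)
                wprintf(L"%-20s %s\n", name, rtype);
            CloseClusterResource(hRes);
        }

        if (hEnum)    ClusterCloseEnum(hEnum);
        if (hCluster) CloseCluster(hCluster);
        return 0;
    }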

2.3.1 Upgrades that Include a Print Spooler Resource
----------------------------------------------------------------------

If you want to perform a rolling upgrade of a cluster that has a
Print Spooler resource, you must consider two issues.

First, the Print Spooler resource supports upgrades (including
rolling upgrades or any other kind of upgrade) only on printers on
cluster-supported ports (LPR or standard monitor ports). For
information about what to do if your printer is not supported, see
"Alternatives to Rolling Upgrades from Windows 2000" later in this
text file series.

Second, when you operate a mixed-version cluster that includes a
Print Spooler resource, note the following:

* Do not change printer settings in a mixed-version cluster with a
  Print Spooler resource.

* If you add a new printer, when you install the drivers for that
  printer, be sure to install both the driver for Windows 2000 and
  the driver for Whistler on all nodes.

* If printing preferences or defaults are important, be sure to
  check them. Printing preferences in Whistler won't necessarily
  correspond to document defaults for the same printer in Windows
  2000. This can be affected by differences in the drivers for the
  two operating systems.

When the rolling upgrade is complete and both cluster nodes are
running the updated operating system, you can make any modifications
you choose to your printer configuration.

2.4 Alternatives to Rolling Upgrades from Windows 2000
======================================================================

Certain resources are not supported during rolling upgrades,
including:

* Internet Information Server (IIS)
* Microsoft Distributed Transaction Coordinator (MS DTC)
* Microsoft Message Queuing (MSMQ)

Special procedures, described below, must be followed when performing
an upgrade of a cluster that contains these resources. In addition to
the three resource types above, you might also have other resources
that are not supported during rolling upgrades. Be sure to read
Readme.doc in the root directory of the Whistler CD, as well as the
product documentation that comes with the application or resource.

2.4.1 Upgrades that Include an IIS Resource
----------------------------------------------------------------------

IIS 6.0 is not supported during rolling upgrades. With earlier
versions of IIS, you could configure an individual Web site to fail
over as a cluster resource. However, with IIS 6.0, the entire IIS
service must fail over, not individual Web sites. If you have
individual Web sites or the IIS service configured as a cluster
resource, you must use the following procedure to upgrade to Whistler.

>>> To upgrade from Windows 2000 on a cluster that includes an IIS
resource:

1. Remove any individual Web sites that you have configured as
   cluster resources from their cluster group. You can no longer
   designate a single site as a cluster resource.

2. If you have the IIS service configured as a cluster resource, take
   this resource offline. To take the resource offline, follow the
   procedure described in "Upgrades for Other Non-Supported
   Resources" later in this text file.

3. Perform a rolling upgrade, as described in the procedure "To
   perform a rolling upgrade" earlier in this text file.

4. Once you have completed the upgrade, you can bring the IIS service
   back online.

Important: With IIS 6.0, you can configure only the IIS service as a
cluster resource. You cannot configure individual Web sites as cluster
resources.
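
If you manage the cluster programmatically, steps 2 and 4 correspond
to the OfflineClusterResource and OnlineClusterResource calls in the
Cluster API. The following C fragment is a minimal sketch; the
resource name "IIS Service" is a placeholder for the name of your own
IIS service resource.

    /* Sketch: take a resource offline, then bring it back online.
       Both calls can return ERROR_IO_PENDING while the state change
       completes. */
    #include <windows.h>
    #include <clusapi.h>
    #pragma comment(lib, "clusapi.lib")

    int main(void)
    {
        HCLUSTER  hCluster = OpenCluster(NULL);
        HRESOURCE hRes     = OpenClusterResource(hCluster,
                                                 L"IIS Service");

        if (hRes)
        {
            OfflineClusterResource(hRes);  /* before the upgrade */
            /* ... perform the rolling upgrade (step 3) ... */
            OnlineClusterResource(hRes);   /* after the upgrade */
        }

        if (hRes)     CloseClusterResource(hRes);
        if (hCluster) CloseCluster(hCluster);
        return 0;
    }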

2.4.2 Upgrades that Include an MS DTC Resource
----------------------------------------------------------------------

Microsoft Distributed Transaction Coordinator (MS DTC) is not
supported during rolling upgrades. However, you can perform a process
similar to a rolling upgrade.

>>> To upgrade from Windows 2000 on a cluster that includes an MS DTC
resource:

1. Take the MS DTC resource offline by using Cluster Administrator
   and clicking the Resources folder. In the details pane, click the
   MS DTC resource, and then on the File menu, click Take Offline.

   Caution: Taking a resource offline causes all resources that
   depend on that resource to be taken offline.

2. Configure the MS DTC resource so that the only allowable owner
   is the node it is currently on by using Cluster Administrator
   and clicking the Resources folder. In the details pane, click
   the MS DTC resource. On the File menu, click Properties. On the
   General tab, next to Possible owners, click Modify. Specify
   Node 2 as an available node and, if necessary, remove Node 1
   from the Available nodes list.

3. Upgrade a node that does not contain the MS DTC resource from
   Windows 2000 to Whistler. For general information about Setup,
   review the sections earlier in this text file series.

4. Move the MS DTC resource to the upgraded node, following the
   procedure described in step 1.

5. Configure the MS DTC resource so that the only allowable owner
   is the upgraded node, following the procedure described in
   step 2.

6. Upgrade the remaining nodes from Windows 2000 to Whistler.

7. Configure the allowable owners for the MS DTC resource as
   appropriate for your configuration.

8. Manually restart all dependent services, and then bring the MS DTC
   resource back online by using Cluster Administrator and clicking
   the Resources folder. In the details pane, click the MS DTC
   resource, and then on the File menu, click Bring Online.
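
Steps 1, 2, and 5 have Cluster API equivalents: OfflineClusterResource
takes the resource offline, and AddClusterResourceNode and
RemoveClusterResourceNode edit its possible-owners list. The
following C fragment is a sketch only; "MSDTC", "Node1", and "Node2"
are placeholder names for your own resource and nodes.

    /* Sketch: take MS DTC offline and make Node2 its only possible
       owner. */
    #include <windows.h>
    #include <clusapi.h>
    #pragma comment(lib, "clusapi.lib")

    int main(void)
    {
        HCLUSTER  hCluster = OpenCluster(NULL);
        HRESOURCE hRes     = OpenClusterResource(hCluster, L"MSDTC");
        HNODE     hNode1   = OpenClusterNode(hCluster, L"Node1");
        HNODE     hNode2   = OpenClusterNode(hCluster, L"Node2");

        if (hRes && hNode1 && hNode2)
        {
            OfflineClusterResource(hRes);             /* step 1 */
            /* Ensure Node2 is on the possible-owners list, then
               remove Node1 from it (steps 2 and 5). */
            AddClusterResourceNode(hRes, hNode2);
            RemoveClusterResourceNode(hRes, hNode1);
        }

        if (hNode2)   CloseClusterNode(hNode2);
        if (hNode1)   CloseClusterNode(hNode1);
        if (hRes)     CloseClusterResource(hRes);
        if (hCluster) CloseCluster(hCluster);
        return 0;
    }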

2.4.3 Upgrades That Include an MSMQ Resource
----------------------------------------------------------------------

Microsoft Message Queuing (MSMQ) does not support rolling upgrades.
The MSMQ resource is dependent on the MS DTC resource, so be sure to
follow the steps outlined in the preceding section, "Upgrades that
Include an MS DTC Resource."

>>> To upgrade from Windows 2000 on a cluster that includes an MSMQ
resource:

1. Upgrade the operating system of the nodes to Whistler.

2. Click Start, point to Programs, point to Administrative Tools, and
   then click Configure Your Server.

3. In Configure Your Server, click Finish Setup, and then click
   Configure Message Queuing Cluster Resources.

4. Follow the instructions that appear in the Configure Message
   Queuing Cluster Resources Wizard.

2.4.4 Upgrades for Other Non-Supported Resources
----------------------------------------------------------------------

If your cluster has other resources that are not supported during a
rolling upgrade and are not described above, take those resources
offline prior to performing the rolling upgrade.

>>> To take a resource offline and perform a rolling upgrade:

1. Confirm that your systems are running Windows 2000.

2. Using the information in "Resource Behavior During Rolling
   Upgrades" earlier in this text file series, list the resources
   in your cluster that are not supported during rolling upgrades.

3. In Cluster Administrator, click the Resources folder.

4. In the right pane, click the resource you want.

5. On the File menu, click Take Offline.

6. Repeat the preceding steps until you have taken offline all
   resources that do not support rolling upgrades.

7. Perform a rolling upgrade, as described in the procedure "To
   perform a rolling upgrade" earlier in this text file series.

8. For each resource that you listed in step 2, follow the
   product's instructions for installing or reconfiguring the
   application so that it will run with Whistler.
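
If you keep the list from step 2 in a script, steps 3 through 6 can
be automated with the Cluster API, as in the following C sketch. The
resource names shown are placeholders; substitute the names from your
own list.

    /* Sketch: take each resource on a prepared list offline. */
    #include <windows.h>
    #include <clusapi.h>
    #pragma comment(lib, "clusapi.lib")

    int main(void)
    {
        /* Placeholder names; use the list you made in step 2. */
        static LPCWSTR toOffline[] = { L"Resource A", L"Resource B" };
        HCLUSTER  hCluster = OpenCluster(NULL);
        HRESOURCE hRes;
        int i;

        for (i = 0; i < (int)(sizeof(toOffline) / sizeof(toOffline[0]));
             i++)
        {
            hRes = OpenClusterResource(hCluster, toOffline[i]);
            if (hRes)
            {
                OfflineClusterResource(hRes);
                CloseClusterResource(hRes);
            }
        }

        if (hCluster) CloseCluster(hCluster);
        return 0;
    }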

======================================================================
3.0 Installation on Cluster Nodes
======================================================================

The following sections provide important information about how to
prepare for cluster installation, begin hardware installation for a
cluster, and start Setup on the first cluster node.

3.1 Planning and Preparing for Cluster Installation
======================================================================

Before carrying out cluster installation, you will need to plan
hardware and network details.

Caution: Make sure that Datacenter Server and Cluster service are
installed and running on one node before starting the operating system
on another node. If the operating system is started on multiple nodes
before Cluster service is running on one node, the cluster storage
could be corrupted. Once Cluster service is running properly on one
node, the other nodes can be installed and configured simultaneously.
Each node of your cluster must be running Datacenter Server.

In your planning, review the following items:

* Cluster hardware and drivers.

  Check that your hardware, including your cluster storage and
  other cluster hardware, is compatible with Whistler Datacenter
  Server. To check this, see the Hardware Compatibility List (HCL)
  on the Whistler CD, in the Support folder, in Hcl.txt. For the
  most up-to-date list of supported hardware, see the Hardware
  Compatibility List by visiting the Microsoft Web site at:

  http://www.microsoft.com/

  You must have a separate PCI storage host adapter (SCSI or Fibre
  Channel) for the shared disks. This is in addition to the boot
  disk adapter.

  Also check that you have the drivers you need in order to use the
  cluster storage hardware with Whistler Datacenter Server. (Drivers
  are available from your hardware manufacturer.)

  Review the manufacturer's instructions carefully before you begin
  installing cluster hardware. Otherwise, the cluster storage could
  be corrupted.

  To simplify configuration and eliminate potential compatibility
  problems, consider using identical hardware for all nodes.

* Network adapters on the cluster nodes.

  In your planning, decide what kind of communication each network
  adapter will carry.

  Note: To reduce the risks of having a single point of failure,
  plan on having two or more network adapters in each cluster node,
  and connect each adapter to a physically separate network. The
  adapters on a given node must connect to networks on different
  subnets.

  The following table shows recommended ways of connecting network
  adapters:

  ADAPTERS
  PER NODE  RECOMMENDED USE
  --------  --------------------------------------------------------
  2         One private network (node-to-node only), plus
            one mixed network (node-to-node plus client-to-cluster).

  3         Two private networks (node-to-node), plus
            one public network (client-to-cluster).
            With this configuration, the adapters using the private
            networks must use static IP addresses (not DHCP).

            or

            One private network (node-to-node), plus
            one public network (client-to-cluster), plus
            one mixed network (node-to-node plus client-to-cluster).

  The following list provides more details about the types of
  communication that an adapter can carry:

  * Only node-to-node communication (private network).

    This implies that the server has one or more additional
    adapters to carry other communication.

    For node-to-node communication, you will connect the network
    adapter to a private network used exclusively within the
    cluster. Note that if the private network uses a single hub or
    network switch, that piece of equipment becomes a potential
    point of failure in your cluster.

    The nodes of a cluster must be on the same subnet, but you can
    use virtual LAN (VLAN) switches on the interconnects between
    two nodes. If you use a VLAN, the point-to-point, round-trip
    latency must be less than 1/2 second, and the link between two
    nodes must appear as a single point-to-point connection from
    the perspective of the operating system. To avoid single points
    of failure, use independent VLAN hardware for the different
    paths between the nodes.

    If your nodes use multiple private (node-to-node) networks, the
    adapters for those networks must use static IP addresses (not
    DHCP).

  * Only client-to-cluster communication (public network).

    This implies that the server has one or more additional
    adapters to carry other communication.

  * Both node-to-node and client-to-cluster communication (mixed
    network).

    If you have only one network adapter per node, it must
    carry both these kinds of communication. If you have multiple
    network adapters per node, a network adapter that carries both
    kinds of communication can provide backup for other network
    adapters.

  * Communication unrelated to the cluster.

    If a clustered node also provides services unrelated to the
    cluster, and there are enough adapters in the cluster node, you
    might want to use one adapter for carrying communication
    unrelated to the cluster.

  Consider choosing a name for each connection that describes its
  purpose. The name will make it easier to identify the connection
  whenever you are configuring the server.

* Cluster IP address.

  Obtain a static IP address for the cluster itself. You cannot use
  DHCP for this address.

* IP addressing for cluster nodes.

  Determine how to handle the IP addressing for the cluster nodes.
  Each network adapter on each node will need IP addressing. You can
  provide IP addressing through DHCP, or you can assign each network
  adapter a static IP address. If you use static IP addresses, the
  addresses for each linked pair of network adapters (linked
  node-to-node) should be on the same subnet.

  Note: If you use DHCP for the cluster nodes, it can act as a
  single point of failure. That is, if you set up your cluster nodes
  so that they depend on a DHCP server for their IP addresses,
  temporary failure of the DHCP server can mean temporary
  unavailability of the cluster nodes. When deciding whether to use
  DHCP, evaluate ways to ensure availability of DHCP services, and
  consider the possibility of using long leases for the cluster
  nodes. This will help ensure that they always have a valid IP
  address.

* Cluster name.

  Determine or obtain an appropriate name for the cluster. This is
  the name administrators will use for connections to the cluster.
  (The actual applications running on the cluster will typically
  have different network names.) The cluster name must be different
  from the domain name, from all computer names on the domain, and
  from other cluster names on the domain.

* Computer accounts and domain assignment for cluster nodes.

  Make sure that the cluster nodes all have computer accounts in
  the same domain. Cluster nodes cannot be in a workgroup.

* Operator user account for installing and configuring the Cluster
  service.

  To install and configure Cluster service, you must log on to
  each node with an account that has administrative privileges on
  those nodes.

* Cluster service user account.

  Create or obtain a Cluster service user account. This is the
  name and password under which Cluster service will run. You
  will need to supply this user name and password during cluster
  installation.

  The Cluster service user account should be a new account. The
  account must be a domain account; it cannot be a local account.
  The account also must have local administrative privileges on all
  of the cluster nodes. Be sure to keep the password from expiring
  on the account (follow your organization's policies for password
  renewal).

* Volume for important cluster configuration information (checkpoint
  and log files).

  Plan to set aside a volume on your cluster storage for holding
  important cluster configuration information. This information
  makes up the quorum resource of the cluster, needed when a
  cluster node stops functioning. The quorum resource provides
  node-independent storage of crucial data needed by the cluster.

  The recommended minimum size for the volume is 500 MB. You should
  use a different volume for the quorum resource than you use for
  user data.

* List of storage devices or disks attached to the first server on
  which you will install clustering.

  Unless the first server on which you will install clustering has
  relatively few storage devices or disks attached to it, you
  should make a list that identifies the ones intended for cluster
  storage. This makes it easy to choose storage devices or disks
  correctly during cluster configuration.

Note: When planning and carrying out disk configuration for the
cluster disks, configure them as basic disks with all partitions
formatted as NTFS. Do not configure them as dynamic disks, and do
not use Encrypting File System, volume mount points, spanned volumes
(volume sets), or Remote Storage on the cluster disks.
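
As a quick planning check, you can confirm that a cluster disk volume
is formatted as NTFS by calling the Win32 GetVolumeInformation
function. The following C fragment is a sketch only; the drive letter
X: is a placeholder for one of your cluster disk volumes.

    /* Sketch: report the file system on one volume (expect "NTFS"). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WCHAR fsName[MAX_PATH + 1];

        if (GetVolumeInformationW(L"X:\\", NULL, 0, NULL, NULL, NULL,
                                  fsName, MAX_PATH + 1))
            wprintf(L"File system: %s\n", fsName);
        else
            wprintf(L"GetVolumeInformation failed: %lu\n",
                    GetLastError());
        return 0;
    }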

The following section describes the physical installation of the
cluster storage.

3.2 Beginning the Installation of the Cluster Hardware
======================================================================

The steps you carry out when first physically connecting and
installing the cluster hardware are crucial. Be sure to follow the
hardware manufacturer's instructions for these initial steps.

Important: Carefully review your network cables after connecting
them. Make sure no cables are crossed by mistake (for example, a
private network connected to a public one).

Caution: When you first attach your cluster hardware (the shared bus
and cluster storage), be sure to work only from the firmware
configuration screens on the cluster nodes (a node is a server in a
cluster). On a 32-bit computer, use the BIOS configuration screens. On
a 64-bit computer, use the Extensible Firmware Interface (EFI)
configuration screens. The instructions from your manufacturer will
describe whether these configuration screens are displayed
automatically or whether you must, after turning on the computer,
press specific keys to open them. Follow the manufacturer's
instructions for completing the BIOS or EFI configuration process.
Remain in the BIOS or EFI, and do not allow the operating system to
start during this initial installation phase.

3.3 Completing the Installation
======================================================================

While you are still in the BIOS or EFI configuration screens, ensure
that you can scan the bus and see the drives from all cluster nodes.
After the BIOS or EFI configuration is completed, start the operating
system on one cluster node only and carry out the installation of
Cluster service. Before starting the operating system on another
node, make sure that Whistler Datacenter Server and Cluster service
are installed and running on the first node. If the operating system
is started on multiple nodes before Cluster service is running on one
node, the cluster storage could be corrupted.

3.4 Installation on the First Cluster Node
======================================================================

It is important that you work on one node (never two nodes) when you
exit the BIOS or EFI configuration screens and allow the operating
system to start for the first time.

Caution: Make sure that Whistler Datacenter Server and Cluster service
are installed and running on one node before starting the operating
system on another node. If the operating system is started on multiple
nodes before Cluster service is running on one node, the cluster
storage could be corrupted.

3.4.1 Completing the Installation on the First Cluster Node
----------------------------------------------------------------------

If you have not already installed Whistler Datacenter Server on the
first cluster node, install it now. For information about decisions
you must make, such as decisions about licensing and about the
components to install, see the sections earlier in this text file
series.

When Whistler Datacenter Server is installed, use the following
procedure to obtain specific information about how to complete the
installation of the cluster.

>>> To obtain additional information about how to install and
configure Cluster service:

1. With Whistler Datacenter Server running on one cluster node, click
   Start, and then click Help and Support.

2. Click Enterprise Technologies, and then click Windows Clustering.

3. Click Server Clusters.

4. Click Checklists: Creating Server Clusters, and then click
   Checklist: Creating a server cluster.

5. Use the checklist to guide you through the process of completing
   the installation of your server cluster.