**********************************************************************
Windows Server 2003, Enterprise Edition
Setup Text Files, Part 6 of 6:
Upgrading and Installing on Cluster Nodes
**********************************************************************
This part of the text file series provides information about upgrading
and installing on cluster nodes. With Microsoft Windows
Server 2003, Enterprise Edition, and Microsoft Windows
Server 2003, Datacenter Edition, you can use clustering to ensure that
users have constant access to important server-based resources. With
clustering, you create several cluster nodes that appear to users as
one server. If one of the nodes in the cluster fails, another node
begins to provide service (a process known as failover). Critical
applications and resources remain continuously available.

The following list of headings can help you find the information
about server clusters that applies to you. For information about basic
planning for an upgrade or a new installation, see EntSrv1.TXT,
EntSrv2.TXT, and EntSrv3.TXT. For information about running
Setup, see EntSrv4.TXT.

In EntSrv5.TXT:
---------------
1.0 Preparing for Upgrading Clustering
2.0 Upgrading a Cluster from Windows 2000 to Windows
    Server 2003, Enterprise Edition
3.0 Upgrading a Cluster from Windows NT Server 4.0 to
    Windows Server 2003, Enterprise Edition
3.1 Upgrading from Windows NT Server 4.0 While Not
    Maintaining Cluster Availability

In EntSrv6.TXT:
---------------
Section 3 cont'd.
3.2 Upgrades from Windows NT 4.0 that Include an IIS Resource
4.0 Installing on Cluster Nodes
5.0 Beginning the Cluster Installation on the First Cluster
    Node

----------------------------------------------
3.2 UPGRADES FROM WINDOWS NT SERVER 4.0 THAT
    INCLUDE AN IIS RESOURCE
----------------------------------------------
To upgrade a clustered IIS resource, you must replace the existing
IIS resource with a Generic Script resource. Be aware that the
following procedure is only applicable when upgrading directly from
Microsoft Windows NT Server 4.0 to Windows Server 2003,
Enterprise Edition.

To perform the following procedure, you must be a member of the
Administrators group on the local computer. If the computer is joined
to a domain, members of the Domain Admins group might be able to
perform this procedure.
>>>TO UPGRADE FROM WINDOWS NT SERVER 4.0 ON A CLUSTER THAT INCLUDES AN
IIS RESOURCE
1.  Confirm that your hardware is designed for or is compatible with
    Windows Server 2003, Enterprise Edition.
2.  As appropriate, notify users that you will be shutting down the
    applications they use on the cluster.
3.  Ensure that Service Pack 5 or later has been applied to all
    computers that will be upgraded from Windows NT Server 4.0 to
    Windows Server 2003, Enterprise Edition.
4.  Stop the applications that are made available through
    the cluster.
5.  Remove any resources that are not supported by Windows
    Server 2003, Enterprise Edition, including NNTP Service
    Instance, SMTP Service Instance, and Time Service resources. Do
    this by using Cluster Administrator and clicking the Resources
    folder in the console tree. In the details pane, click the
    resource that you want to remove, then on the File menu,
    click Delete.
6.  Set the Cluster service on all nodes to start manually.
7.  Shut down and turn off the node that does not contain the IIS
    resource, or bring it to a shutdown state appropriate to your
    method of termination.
    CAUTION: If you are using a shared storage device, when you
    upgrade, power on, and start the operating system, it is vitally
    important that only one node has access to the cluster disks.
    Otherwise the cluster disks can become corrupted. To prevent
    corruption of the cluster disks, shut down all but one cluster
    node, or use other techniques (for example, LUN masking,
    selective presentation, or zoning) to protect the cluster disks,
    before creating the cluster. Once the Cluster service is running
    properly on one node, the other nodes can be installed and
    configured simultaneously.
8.  On the running node, note the dependencies of the IIS instance
    resource. Note the resources that depend on the IIS resource,
    and also note the resources that the IIS resource itself
    depends on.
9.  Take the group containing the IIS instance resource offline by
    using Cluster Administrator and clicking the Groups folder. In
    the details pane, click the group containing the IIS resource,
    then on the File menu, click Take Offline.
10. Remove any dependencies on the IIS instance resource by using
    Cluster Administrator and clicking the Resources folder. For
    each resource that is dependent on the IIS instance resource,
    in the details pane, click the resource you want to modify, then
    on the File menu, click Properties. On the Dependencies tab,
    click Modify. Click the IIS resource in the Dependencies list
    and click the left arrow to move it to the Available resources
    list.
11. Delete the IIS instance resource by using Cluster Administrator
    and clicking the Resources folder in the console tree. In the
    details pane, click the IIS instance resource, then on the File
    menu, click Delete.
12. Delete the unsupported resource type. Open a command prompt,
    type the following command, and press ENTER:
    cluster restype "IIS Virtual Root" /delete /type
    (For command-line equivalents of steps 9 through 12 and 25
    through 27, see the sketch at the end of this section.)
13. Stop the Cluster service on the remaining node.
14. Upgrade the operating system on the running node. For general
    information about running Setup, see EntSrv4.TXT.
    The cluster software will be upgraded automatically during the
    operating system upgrade. Note that you cannot make
    configuration changes such as configuring cluster disks as
    dynamic disks. After you upgrade, close Manage Your Server if
    it is displayed.
    Note: When upgrading from Windows NT Server 4.0 to
    Windows Server 2003, Enterprise Edition, the Cluster
    service user account requires the additional user right "Act
    as part of the operating system." If possible, Setup will
    grant this user right automatically. If Setup cannot grant
    the user right, you will be prompted to make this change.
    For security reasons, you must grant this user right to the
    specific user account that is used by the Cluster service.
    You cannot correct this problem by granting the user
    right to a security group of which the user account is a
    member. Typically, you must grant this user right as a local
    user right; it cannot be a domain-level user right. However,
    if your node is a domain controller, you can use the
    domain-level user right.
    Manage Your Server will appear when you initially log on to
    the newly upgraded node as an Administrator. Close Manage
    Your Server to continue with the upgrade. For more
    information about setting user rights on
    Windows NT Server 4.0, open User Manager for Domains, click
    the Help menu in User Manager, and refer to "Managing
    the User Rights Policy."
15. Start the Cluster service on the upgraded node.
16. Reconfigure the Cluster service on the upgraded node to start
    automatically.
17. Shut down and turn off the upgraded node, or bring it to a
    shutdown state appropriate to your method of termination.
18. Turn on the other node in the cluster and upgrade the operating
    system on that node. Manage Your Server will appear when you
    initially log on to the newly upgraded node as an Administrator.
    Close Manage Your Server to continue with the upgrade.
    CAUTION: If you are using a shared storage device, when you
    upgrade, power on, and start the operating system, it is vitally
    important that only one node has access to the cluster disks.
    Otherwise the cluster disks can become corrupted. To prevent
    corruption of the cluster disks, shut down all but one cluster
    node, or use other techniques (for example, LUN masking,
    selective presentation, or zoning) to protect the cluster disks,
    before creating the cluster. Once the Cluster service is running
    properly on one node, the other nodes can be installed and
    configured simultaneously.
19. After the second node is upgraded, start the Cluster service on
    the second upgraded node. The node automatically rejoins the
    existing cluster.
20. Reconfigure the Cluster service on the upgraded node to start
    automatically.
21. Turn on the first node.
22. On one of the upgraded nodes, click Start, point to Programs,
    point to Administrative Tools, and then click
    Cluster Administrator.
23. Check to see that the cluster disks are online in
    Cluster Administrator.
    CAUTION: Be sure that the cluster disks are online in
    Cluster Administrator before continuing to the next step.
    When the disks are online, the Cluster service is working,
    which ensures that only one node can access the cluster
    storage at any given time. Otherwise the cluster storage
    could be corrupted.
24. If you do not already have a Distributed Transaction Coordinator
    (DTC) resource on the cluster that you are upgrading, create a
    DTC resource on this cluster.
    Note: To cluster IIS on Windows Server 2003,
    Enterprise Edition, you must have a DTC resource on that
    cluster as well.
25. On the node that used to contain the IIS resource, create a
    Generic Script resource by following the procedure documented
    in "Checklist: Creating a clustered IIS Web or FTP service." To
    find this procedure, click Start on the upgraded node, click
    Help and Support, and click Availability and Scalability. Click
    Windows Clustering, click Server Clusters, click Checklists:
    Creating Server Clusters, then click Checklist: Creating a
    clustered IIS Web or FTP service. You can also view this Help
    and Support Center topic on the Web at:
    http://www.microsoft.com/windowsserver2003/proddoc/
    Re-create the dependencies of the Generic Script resource so
    that they are identical to those of the now-deleted IIS
    resource. Make everything that was dependent on the IIS
    resource dependent instead on the Generic Script resource. In
    addition, make the Generic Script resource dependent on
    everything that the IIS resource was dependent on.
26. Start the W3SVC service on all nodes and set the service to
    start automatically. For more information about the W3SVC, see
    the topic titled "Internet Information Services (IIS)
    security." To find this topic, click Start on the upgraded node,
    click Help and Support, and click Internet Services. Click
    Internet Information Services, then click Internet Information
    Services (IIS) security. You can also view this Help and Support
    Center topic on the Web at:
    http://www.microsoft.com/windowsserver2003/proddoc/
27. Bring the group containing the new Generic Script resource
    online by using Cluster Administrator and clicking the Groups
    folder. In the details pane, click the group containing the
    Generic Script resource, then on the File menu, click
    Bring Online.
28. Using IIS, start the Web site.
29. If you want to add additional nodes to the cluster, add them
    after the first two nodes are upgraded.
    IMPORTANT: If your goal is to have more than two nodes
    in the cluster, you must use Fibre Channel (not SCSI) for the
    cluster storage. Before adding additional nodes, ensure that
    your entire cluster solution is compatible with products in
    the Windows Server 2003 family.
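
For administrators who prefer to script steps 9 through 12 and 25
through 27 rather than work in the Cluster Administrator GUI, the
cluster.exe command-line tool can perform the same operations. The
following is only a sketch: the group and resource names shown
("WebGroup", "IIS Server Instance", "IIS Script") are placeholders
for the names in your own cluster, the ScriptFilepath property name
and the clusweb.vbs path are assumptions based on the checklist
referenced in step 25, and you should confirm the exact switch
spellings with cluster /? on your system.

    rem Take the group containing the IIS resource offline (step 9).
    cluster group "WebGroup" /offline

    rem Remove dependencies on the IIS resource, then delete it and
    rem its resource type (steps 10 through 12).
    cluster resource "Dependent Resource" /removedependency:"IIS Server Instance"
    cluster resource "IIS Server Instance" /delete
    cluster restype "IIS Virtual Root" /delete /type

    rem After the upgrade, create the Generic Script resource and
    rem re-create the dependencies (step 25).
    cluster resource "IIS Script" /create /group:"WebGroup" /type:"Generic Script"
    cluster resource "IIS Script" /priv ScriptFilepath="C:\WINDOWS\system32\inetsrv\clusweb.vbs"
    cluster resource "IIS Script" /adddependency:"Resource IIS Depended On"

    rem Start W3SVC and set it to start automatically (step 26),
    rem then bring the group online (step 27).
    sc config w3svc start= auto
    net start w3svc
    cluster group "WebGroup" /online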
======================================================================
4.0 INSTALLING ON CLUSTER NODES
======================================================================
Before beginning the installation of a server cluster, review
EntSrv1.TXT, EntSrv2.TXT, EntSrv3.TXT, and EntSrv4.TXT for general
information about Setup. Also see the list of resources in "Other
Sources of Information" in EntSrv5.TXT.

For information about upgrading on cluster nodes, see the list of
sections at the beginning of this text file.

For information about installing on cluster nodes, see the sections
that follow. These sections provide important information about:

* How to plan for a new cluster installation

* Decisions that you need to make regarding your quorum resource
  (the resource that maintains the definitive copy of the cluster
  configuration data and that must always be available for the
  cluster to run)

---------------------------------------
4.1 Planning for Cluster Installation
---------------------------------------
Before carrying out cluster installation, you need to plan hardware
and network details.

CAUTION: If you are using a shared storage device, before
creating a cluster, when you turn on the computer and start the
operating system, it is very important that only one node has
access to the cluster disks. Otherwise, the cluster disks can
become corrupted. To prevent the corruption of the cluster disks,
shut down all but one cluster node, or use other techniques (for
example, LUN masking, selective presentation, or zoning) to
protect the cluster disks, before creating the cluster. Once the
Cluster service is running properly on one node, the other nodes
can be installed and configured simultaneously. Each node of your
cluster must be running Windows Server 2003,
Enterprise Edition.

In your planning, review the following items:

Cluster hardware and drivers
----------------------------
Microsoft supports only complete server cluster systems that are
compatible with the Windows Server 2003 family. Confirm that your
entire cluster solution is compatible with products in the
Windows Server 2003 family by checking the hardware compatibility
information in the Windows Catalog at:
http://www.microsoft.com/windows/catalog/

For cluster disks, you must use the NTFS file system and configure
the disks as basic disks. You cannot configure cluster disks as
dynamic disks, and you cannot use features of dynamic disks such as
spanned volumes (volume sets).

Review the manufacturer's instructions carefully before you begin
installing cluster hardware. Otherwise the cluster storage could be
corrupted. If your cluster hardware includes a SCSI bus, be sure to
carefully review any instructions about termination of the SCSI bus
and configuration of SCSI IDs.

To simplify configuration and eliminate potential compatibility
problems, consider using identical hardware for all nodes.

Network adapters on the cluster nodes
-------------------------------------
In your planning, decide what kind of communication each network
adapter will carry. The following list provides details about the
types of communication that an adapter can carry:

* Only node-to-node communication (private network). This implies
  that the server has one or more additional adapters to carry
  other communication.
  For node-to-node communication, you connect the network adapter
  to a private network that is used exclusively within the cluster.
  Note that if the private network uses a single hub or network
  switch, that piece of equipment becomes a potential point of
  failure in your cluster.
  The nodes of a cluster must be on the same subnet, but you can use
  virtual LAN (VLAN) switches on the interconnects between two
  nodes. If you use a VLAN, the point-to-point, round-trip latency
  must be less than 1/2 second, and the link between two nodes must
  appear as a single point-to-point connection from the perspective
  of the Windows operating system running on the nodes. To avoid
  single points of failure, use independent VLAN hardware for the
  different paths between the nodes.
  If your nodes use multiple private (node-to-node) networks, it is
  a best practice for the adapters on those networks to use static
  IP addresses instead of DHCP.

* Only client-to-cluster communication (public network). This
  implies that the server has one or more additional adapters to
  carry other communication.

* Both node-to-node and client-to-cluster communication (mixed
  network). When you have multiple network adapters per node, a
  network adapter that carries both kinds of communication can
  serve as a backup for other network adapters.

* Communication unrelated to the cluster. If a clustered node also
  provides services unrelated to the cluster, and there are enough
  adapters in the cluster node, you might want to use one adapter
  for carrying communication unrelated to the cluster.

The nodes of a cluster must be connected by two or more local area
networks (LANs); at least two networks are required to prevent a
single point of failure. A server cluster whose nodes are connected by
only one network is not a supported configuration. The adapters,
cables, hubs, and switches for each network must fail independently.
This usually implies that the components of any two networks must be
physically independent.

At least two networks must be configured to handle "All
communications (mixed network)" or "Internal cluster communications
only (private network)."

The recommended configuration for two adapters is to use one adapter
for the private (node-to-node only) communication and the other
adapter for mixed communication (node-to-node plus client-to-cluster
communication). Do not use teaming network adapters on the
private network.

If you use fault-tolerant network adapters, create multiple private
networks instead of a single fault-tolerant network.

Do not configure a default gateway or DNS or WINS server on the
private network adapters. Do not configure private network adapters to
use name resolution servers on the public network; otherwise a name
resolution server on the public network might map a name to an IP
address on the private network. If a client then received that IP
address from the name resolution server, it might fail to reach the
address because no route from the client to the private network
address exists.

Configure WINS and/or DNS servers on the public network adapters. If
Network Name resources are used on the public networks, set up the DNS
servers to support dynamic updates; otherwise the Network Name
resources may not fail over correctly. Also, configure a default
gateway on the public network adapters. If there are multiple public
networks in the cluster, configure a default gateway on only one
of these.
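
As a sketch of the preceding guidance, the following netsh commands
configure a private network adapter with a static IP address and with
no default gateway, DNS server, or WINS server. The connection name
"Private" and the addresses shown are placeholders for your own
values; verify the exact syntax with netsh interface ip set /? on
your system.

    rem Static address, no default gateway, on the private adapter.
    netsh interface ip set address name="Private" source=static addr=10.10.10.1 mask=255.255.255.0 gateway=none

    rem No DNS or WINS servers on the private adapter.
    netsh interface ip set dns name="Private" source=static addr=none
    netsh interface ip set wins name="Private" source=static addr=none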
When you use either the New Server Cluster Wizard or the Add Nodes
Wizard to install clustering on a node that contains two network
adapters, by default the wizard configures both of the network
adapters for mixed network communications. As a best practice,
reconfigure one adapter for private network communications only. For
more information, see "Change how the cluster uses a network" in Help
and Support Center for Windows Server 2003, Enterprise Edition.
To open Help and Support Center, after completing Setup, click Start,
and then click Help and Support. You can also view Help and Support
Center topics on the Web at:
http://www.microsoft.com/windowsserver2003/proddoc/

Consider choosing a name for each connection that tells what it is
intended for. The name will make it easier to identify the connection
whenever you are configuring the server.

Manually configure the communication settings, such as Speed, Duplex
Mode, Flow Control, and Media Type, of each cluster network adapter.
Do not use automatic detection. You must configure all of the cluster
network adapters to use the same communication settings.

The adapters on a given node must connect to networks that are on
different subnets.

Do not use the same IP address for two network adapters, even if they
are connected to two different networks.

Notes: Confirm that your entire cluster solution is
compatible with the products in the Windows Server 2003
family. For more information, see "Cluster hardware and drivers"
earlier in this section.

If you use a crossover cable to implement a private network, when
the cluster is created on the first node, the second node should be
turned on but stopped in the BIOS or at the startup menu. In this
state, the Media Sense feature of Windows might not recognize the
network adapter as connected. If you continue creating the cluster,
the crossover cable will be detected when you start the second
node. The network will be established in the default mode, which
is mixed. You can then change the network mode to private using
Cluster Administrator.

Cluster IP address
------------------
Obtain a static IP address for the cluster itself. You cannot use
DHCP for this address.

IP addressing for cluster nodes
-------------------------------
Determine how to handle the IP addressing for the individual cluster
nodes. Each network adapter on each node requires IP addressing. It is
a best practice to assign each network adapter a static IP address. As
an alternative, you can provide IP addressing through DHCP. If you use
static IP addresses, set the addresses for each linked pair of network
adapters (linked node-to-node) to be on the same subnet.

Note that if you use DHCP for the individual cluster nodes, it can
act as a single point of failure. That is, if you set up your cluster
nodes so that they depend on a DHCP server for their IP addresses,
temporary failure of the DHCP server can mean temporary unavailability
of the cluster nodes. When deciding whether to use DHCP, evaluate ways
to ensure availability of DHCP services, and consider the possibility
of using long leases for the cluster nodes. This helps to ensure that
they always have a valid IP address.

Cluster name
------------
Determine or obtain an appropriate name for the cluster. This is the
name administrators will use for connections to the cluster. (The
actual applications running on the cluster typically have different
network names.) The cluster name must be different from the domain
name, from all computer names on the domain, and from other cluster
names on the domain.

Computer accounts and domain assignment for cluster nodes
---------------------------------------------------------
Make sure that the cluster nodes all have computer accounts in the
same domain. Cluster nodes cannot be in a workgroup.

Operator user account for installing and configuring the
Cluster service
--------------------------------------------------------
To install and configure the Cluster service, you must be using an
account that is in the local Administrators group on each node. As
you install and configure each node, if you are not using an account
in the local Administrators group, you will be prompted to provide the
logon credentials for such an account.

Cluster service user account
----------------------------
Create or obtain the Cluster service user account. This is the name
and password under which the Cluster service will run. You need to
supply this user name and password during cluster installation.

It is best if the Cluster service user account is an account not used
for any other purpose. If you have multiple clusters, set up a unique
Cluster service user account for each cluster. The account must be a
domain account; it cannot be a local account. However, do not make
this account a domain administrator account, because it does not need
domain administrator user rights.

As part of the cluster setup process, the Cluster service user
account is added to the local Administrators group on each node. As
well as being a member of the local Administrators group, the Cluster
service user account requires an additional set of user rights:

* Act as part of the operating system.
* Back up files and directories.
* Adjust memory quotas for a process.
* Increase scheduling priority.
* Log on as a service.
* Restore files and directories.

These user rights are also granted to the Cluster service user
account as part of the cluster setup process. Be aware that the
Cluster service user account will continue to have these user rights
even after all nodes are evicted from the cluster. The risk that this
presents is mitigated by the fact that these user rights are not
granted domain-wide, but rather only locally on each former node.
However, remove this account from each evicted node if it is no
longer needed.

Be sure to keep the password from expiring on the Cluster service
user account (follow your organization's policies for password
renewal).
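
Cluster setup grants the user rights in the preceding list
automatically, so manual changes are normally unnecessary. If you
need to verify or restore them on a node, one option is the
ntrights.exe tool from the Windows Server 2003 Resource Kit. The
following is a sketch only: the account name CONTOSO\ClusterSvc is a
placeholder, and ntrights.exe must be obtained separately from the
Resource Kit.

    rem Grant "Log on as a service" and "Act as part of the
    rem operating system" to the Cluster service account.
    ntrights -u CONTOSO\ClusterSvc +r SeServiceLogonRight
    ntrights -u CONTOSO\ClusterSvc +r SeTcbPrivilege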
Volume for important cluster configuration information
(checkpoint and log files)
------------------------------------------------------
Plan on setting aside a volume on your cluster storage for holding
important cluster configuration information. This information makes up
the cluster quorum resource, which is needed when a cluster node stops
functioning. The quorum resource provides node-independent storage of
crucial data needed by the cluster. For important information on
quorum resource options, see "Quorum Resource Options" later in this
text file.

The recommended minimum size for the volume is 500 MB. It is
recommended that you do not store user data on the volume that holds
the quorum resource.

Note: When planning and carrying out disk configuration
for the cluster disks, configure them as basic disks with all
partitions formatted as NTFS (they can be either compressed or
uncompressed). Partition and format all disks on the cluster
storage device before adding the first node to your cluster. Do
not configure them as dynamic disks, and do not use spanned
volumes (volume sets) or Remote Storage on the cluster disks. For
the 64-bit version of Windows Server 2003, Enterprise Edition,
cluster disks on the cluster storage device must be partitioned as
MBR disks, not as GPT disks.
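
As an illustration of the preceding note, the following commands
prepare a cluster disk as a basic MBR disk with a single NTFS
partition, using the diskpart and format tools included with
Windows Server 2003. This is a sketch only: the disk number (1) and
drive letter (Q) are placeholders, and you should confirm which disk
is the intended cluster disk before partitioning, because these
commands destroy existing data on that disk.

    rem Run from a command prompt on the one node that can access
    rem the cluster storage. In diskpart, type:
    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary
    DISKPART> assign letter=Q
    DISKPART> exit

    rem Format the new partition as NTFS.
    format Q: /FS:NTFS /V:Quorum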
-----------------------------
4.2 Quorum Resource Options
-----------------------------
With server clusters on Windows Server 2003, Enterprise Edition,
you can now choose between three ways to set up the quorum resource
(the resource that maintains the definitive copy of the cluster
configuration data and that must always be available for the cluster
to run).

The first is a single node server cluster, which has been available
in the past and continues to be supported. A single node cluster is
often used for development and testing and can be configured with, or
without, external cluster storage devices. For single node clusters
without an external cluster storage device, the local disk is
configured as the cluster quorum device.

The second option is a single quorum device server cluster, which has
also been available in earlier Windows versions. This model places the
cluster configuration data on a shared cluster storage device that all
nodes can access. This is the most common model and is recommended for
most situations. You might choose the single quorum device model if
all of your cluster nodes are in the same location and you want to
take advantage of the fact that such a cluster continues supporting
users even if only one node is running.

The third option, which is new for Windows Server 2003,
Enterprise Edition, is a "majority node set." A majority node set is a
single quorum resource from a server-cluster perspective; however, the
cluster configuration data is actually stored on multiple disks across
the cluster. The majority node set resource ensures that the cluster
configuration data is kept consistent across the different disks.

In the majority node set model, every node in the cluster uses a
directory on its own local system disk to store the cluster
configuration data. If the configuration of the cluster changes, that
change is reflected across the different disks. Be aware that it is
also possible to have shared storage devices in a majority node set
cluster. The exact configuration depends on the requirements for your
installation.

Use a majority node set cluster only in targeted scenarios, such as:

* Geographically dispersed cluster: A cluster that spans
  multiple sites.

* Eliminating single points of failure: Although when using a
  single cluster storage device the quorum disk itself can be made
  highly available via RAID, the controller port or the Host Bus
  Adapter (HBA) itself may be a single point of failure.

* Clusters with no shared disks: There are some specialized
  configurations that need tightly consistent cluster features
  without having shared disks.

* Clusters that host applications that can fail over, but where
  there is some other, application-specific way to replicate or
  mirror data between nodes: For example, this model is useful if
  you use database log shipping to keep a SQL database state
  up to date.

Do not configure your cluster as a majority node set cluster unless
it is part of a cluster solution offered by your Original Equipment
Manufacturer (OEM), Independent Software Vendor (ISV), or Independent
Hardware Vendor (IHV).

4.2.1 Cluster Model Considerations
-----------------------------------
Before implementing your cluster, consider what type of quorum
resource solution you plan to use. Take into consideration the
following differences between single quorum device clusters and
majority node set clusters.

Note: The following information is presented to help you
make basic decisions about the placement and management of your
cluster nodes and quorum resource. It does not provide all the
details about the requirements for each cluster model, or how each
model handles failover situations. If you are not sure which model
to use or where you want to place your cluster nodes, install
Windows Server 2003, Enterprise Edition, on the first cluster
node, then consult the online cluster documentation in Help
and Support Center for Windows Server 2003, Enterprise
Edition. See "Using a Majority Node Set" later in this text file
for more information on how to access Help and Support Center.

Node failover behavior
----------------------
The failover behavior of the majority node set is significantly
different from the behavior of the single quorum device model:

* Using the single quorum device model, you can maintain cluster
  availability with only a single operational node.

* If you use a majority node set, more than half of the nodes, that
  is, (number of nodes configured in the cluster / 2) + 1 (rounding
  down the division), must be operational to maintain cluster
  availability. The following table shows the number of node
  failures that a given majority node set cluster can tolerate and
  continue to operate:

===================================================================
NUMBER OF NODES     NUMBER OF NODE FAILURES   NUMBER OF NODES
CONFIGURED IN THE   ALLOWED BEFORE            NEEDED TO CONTINUE
CLUSTER             CLUSTER FAILURE           CLUSTER OPERATIONS
-------------------------------------------------------------------
1                   0                         1
2                   0                         2
3                   1                         2
4                   1                         3
5                   2                         3
6                   2                         4
7                   3                         4
8                   3                         5

Geographic considerations
-------------------------
You would commonly use a single quorum resource model if all nodes in
your cluster will be in the same geographical location. As part of
this requirement, your nodes must be connected to the same physical
storage device.

A majority node set, on the other hand, would typically be appropriate
if you have geographically dispersed nodes. The cluster configuration
data is stored locally on each node on a file share that is shared out
to the other nodes on the network. However, those shares must always
be accessible, or nodes can fail.

There are other specific requirements for geographically dispersed
clusters, including the requirement that round-trip latency of the
network between cluster nodes be a maximum of 500 milliseconds. For
information on cluster solutions that meet all requirements for a
geographically dispersed cluster, refer to hardware compatibility
information in the Windows Catalog at:
http://www.microsoft.com/windows/catalog/

Hardware
--------
Microsoft supports only complete server cluster systems that are
compatible with the Windows Server 2003 family of products. For
both cluster models, confirm that your system or hardware components,
including your cluster disks, are compatible with products in the
Windows Server 2003 family by checking the hardware compatibility
information in the Windows Catalog at:
http://www.microsoft.com/windows/catalog/

4.2.2 Using a Majority Node Set
---------------------------------
This section tells how to obtain additional information about the
majority node set model. For a description of a majority node set,
see "Quorum Resource Options" earlier in this text file.

IMPORTANT: Before implementing a majority node set, it is
highly recommended that you read the online clustering
documentation in Help and Support Center to thoroughly understand
all the considerations, requirements, and restrictions for each
type of quorum resource solution.

>>>TO OBTAIN ADDITIONAL INFORMATION ABOUT THE MAJORITY NODE SET MODEL
1.  If Windows Server 2003, Enterprise Edition, is not already
    installed, install Windows Server 2003, Enterprise Edition,
    on the first node, as documented later in this text file.
2.  On the first node, click Start, and then click Help and Support.
3.  Click "Availability and Scalability."
4.  Click "Windows Clustering."
5.  Click "Server Clusters."
6.  Click "Concepts."
7.  Click "Planning Your Server Cluster."
8.  Click "Choosing a Cluster Model."
9.  Read the documentation describing the different options for the
    quorum resource.
10. Follow the procedure outlined in the topic titled "To create a
    cluster."
11. Install or upgrade to Windows Server 2003, Enterprise
    Edition, on the remaining nodes.

Note: You can also view Help and Support Center topics on
the Web at:
http://www.microsoft.com/windowsserver2003/proddoc/
======================================================================
5.0 BEGINNING THE CLUSTER INSTALLATION ON THE FIRST CLUSTER NODE
======================================================================
The steps you carry out when first physically connecting and
installing the cluster hardware are crucial. Be sure to follow the
hardware manufacturer's instructions for these initial steps.

IMPORTANT: Carefully review your network cables after
connecting them. Make sure no cables are crossed by mistake (for
example, private network connected to public).

5.1 Initial Steps to Carry Out in the BIOS or EFI When Using a
    SCSI Shared Storage Device
----------------------------------------------------------------
If you are using a SCSI shared storage device, when you first attach
your cluster hardware (the shared bus and cluster storage), be sure to
work only from the firmware configuration screens on the cluster nodes
(a node is a server in a cluster). On a 32-bit computer, use the BIOS
configuration screens. On an Itanium architecture-based computer, use
the Extensible Firmware Interface (EFI) configuration screens. The
instructions from your manufacturer will describe whether these
configuration screens are displayed automatically or whether you must,
after turning on the computer, press specific keys to access them.
Follow the manufacturer's instructions for completing the BIOS or EFI
configuration process. Remain in the BIOS or EFI configuration
screens, and do not allow the operating system to start, during this
initial installation phase. Complete the following steps while the
cluster nodes are still displaying BIOS or EFI configuration screens,
before starting the operating system on the first cluster node.

IMPORTANT: If your cluster nodes are Itanium architecture-based
computers, use a Fibre Channel bus instead of a SCSI bus.

* Make sure you understand and follow the manufacturer's
  instructions for termination of the SCSI bus.

* Make sure that each device on the shared bus (both SCSI
  controllers and hard disks) has a unique SCSI ID. If the SCSI
  controllers all have the same default ID (often it is SCSI ID 7),
  change one controller to a different SCSI ID, such as SCSI ID 6.
  If there is more than one disk that will be on the shared SCSI
  bus, each disk must also have a unique SCSI ID. In addition, make
  sure that the bus is not configured to reset SCSI IDs
  automatically on startup (otherwise the IDs will change from the
  settings you specify).

* Ensure that you can scan the bus and see the drives from all
  cluster nodes (while remaining in the BIOS or EFI configuration
  screens).

5.2 Initial Steps to Carry Out in the BIOS or EFI When Using a
    Fibre Channel Shared Storage Device or No Shared Storage Device
---------------------------------------------------------------------
* Turn on a single node. Leave all other nodes turned off.

* During this initial installation phase, remain in the BIOS or
  Extensible Firmware Interface (EFI) configuration process, and do
  not allow the operating system to start. While viewing the BIOS
  or EFI configuration screens, ensure that you can scan the bus
  and see the drives from the active cluster node. On a 32-bit
  computer, use the BIOS configuration screens. On an Itanium
  architecture-based computer, use the EFI configuration screens.
  Consult the instructions from your manufacturer to determine
  whether these configuration screens are displayed automatically
  or whether you must, after turning on the computer, press
  specific keys to access them. Follow the manufacturer's
  instructions for completing the BIOS or EFI
  configuration process.

5.3 Final Steps to Complete the Installation
----------------------------------------------
If you have not already installed Windows Server 2003,
Enterprise Edition, on the first cluster node, install it before
proceeding. For information about decisions you must make, such as
decisions about licensing, see EntSrv2.TXT and EntSrv3.TXT. For
information about running Setup, see EntSrv4.TXT.

After you complete the BIOS or EFI configuration, start the operating
system on one cluster node only, and complete the configuration of the
Cluster service using Cluster Administrator.

With the Cluster Administrator New Server Cluster Wizard, you can
choose between the Typical (full) and Advanced (minimum)
configuration options. Typical configuration is appropriate for most
installations and results in a completely configured cluster. Use the
Advanced configuration option only for clusters that have complex
storage configurations that the New Server Cluster Wizard cannot
validate, or for configurations in which you do not want the cluster
to manage all of the storage. The following examples describe each
situation:

* In some complex storage solutions, such as a Fibre Channel
  switched fabric that contains several switches, a particular
  storage unit might have a different identity on each computer in
  the cluster. Although this is a valid storage configuration, it
  violates the storage validation heuristics in the New Server
  Cluster Wizard. If you have this type of storage solution, you
  might receive an error when you are trying to create a cluster
  using the Typical configuration option. If your storage
  configuration is set up correctly, you can disable the storage
  validation heuristics and avoid this error by restarting the New
  Server Cluster Wizard and selecting the Advanced configuration
  option instead.

* On particular nodes in a cluster, you may want to have some disks
  that are to be clustered and some disks that are to be kept
  private. The Typical configuration option configures all disks as
  clustered disks and creates cluster resources for them all.
  With the Advanced configuration option, however, you can keep
  certain disks private, because this configuration creates a
  cluster in which only the quorum disk is managed by the cluster
  (if you chose to use a physical disk as the quorum resource).
  After the cluster is created, you must then use Cluster
  Administrator to add any other disks that you want the cluster to
  manage.

If you are using a shared storage device: Before creating a cluster,
when you turn the computer on and start the operating system, it is
very important that only one node has access to the cluster disks.
Otherwise, the cluster disks can become corrupted. To prevent the
corruption of the cluster disks, shut down all but one cluster node,
or use other techniques (for example, LUN masking, selective
presentation, or zoning) to protect the cluster disks before creating
the cluster. Also, before starting the installation of the second and
subsequent nodes, ensure that all disks that are to be managed by the
cluster have disk resources associated with them. If these disks do
not have disk resources associated with them at this time, the disk
data will be corrupted, because the disks will not be protected and
multiple nodes will attempt to connect to them at the same time.

>>>TO SET UP YOUR CLUSTER USING CLUSTER ADMINISTRATOR
1.  Open Cluster Administrator by clicking Start, pointing to
    Programs, pointing to Administrative Tools, and then clicking
    Cluster Administrator.
2.  In the Open Connection to Cluster dialog box that appears, in
    Action, select Create new cluster, then click OK.
3.  The New Server Cluster Wizard appears. Click Next to continue.
4.  Upon completion of the New Server Cluster Wizard, click Finish.

IMPORTANT: During the cluster creation process (using the
Quorum button on the Proposed Cluster Configuration page), you
will be able to select a quorum resource type (that is, a
Local Quorum resource; a Physical Disk or other storage-class
device resource; or a Majority Node Set resource). For
information on how these quorum resource types relate to the
different cluster models, see "Quorum Resource Options"
earlier in this text file.

Do not use Manage Your Server or the Configure Your Server Wizard to
configure cluster nodes.

>>>TO OBTAIN ADDITIONAL INFORMATION ABOUT HOW TO INSTALL AND CONFIGURE
THE CLUSTER SERVICE
1.  After completing Setup of Windows Server 2003, Enterprise
    Edition, click Start, and then click Help and Support.
2.  Click "Availability and Scalability."
3.  Click "Windows Clustering."
4.  Click "Server Clusters."
5.  Click "Checklists: Creating Server Clusters," and then click
    "Checklist: Planning and creating a server cluster."
6.  Use the checklist to guide you through the process of completing
    the installation of your server cluster.

Unattended Installation
-----------------------
To create and configure a cluster after unattended Setup, run a
script that invokes the cluster /create command and supplies all the
necessary configuration information on the command line. For more
information on creating a cluster using unattended installation, after
you install Windows Server 2003, Enterprise Edition, see "To
create a cluster" in Help and Support Center. To open Help and Support
Center, after completing Setup, click Start, and then click Help and
Support. Also, see the Windows Server 2003 Deployment Kit,
especially "Automating and Customizing Installations."
You can also view Help and Support Center topics on the Web at:
http://www.microsoft.com/windowsserver2003/proddoc/
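
As a sketch of this approach, the command below creates a one-node
cluster from the command line; you would typically run it from a
script after unattended Setup completes. All of the values shown
(cluster name, node name, service account, password, and IP address
information) are placeholders, and the exact switch syntax should be
confirmed by running cluster /create /? on your system.

    cluster /cluster:MYCLUSTER /create /node:NODE1 ^
        /user:CONTOSO\ClusterSvc /pass:P@ssw0rd ^
        /ipaddr:10.1.1.25,255.255.255.0,"Public"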
Information in this document, including URL and other Internet
Web site references, is subject to change without notice.
Unless otherwise noted, the example companies, organizations,
products, domain names, e-mail addresses, logos, people, places
and events depicted herein are fictitious, and no association
with any real company, organization, product, domain name,
e-mail address, logo, person, place or event is intended or
should be inferred. Complying with all applicable copyright laws
is the responsibility of the user. Without limiting the rights
under copyright, no part of this document may be reproduced,
stored in or introduced into a retrieval system, or transmitted
in any form or by any means (electronic, mechanical, photocopying,
recording, or otherwise), or for any purpose, without the express
written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks,
copyrights, or other intellectual property rights covering subject
matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this
document does not give you any license to these patents, trademarks,
copyrights, or other intellectual property.

(c) 2003 Microsoft Corporation. All rights reserved.

The names of actual companies and products mentioned herein may
be the trademarks of their respective owners.