Leaked source code of Windows Server 2003
**********************************************************************
Windows Server 2003, Datacenter Edition
Setup Text Files, Part 4 of 4:
Installing on Cluster Nodes
**********************************************************************

This part of the text file series provides information about
installing on cluster nodes. With Windows Server 2003,
Datacenter Edition, you can use clustering to ensure that users have
constant access to important server-based resources. With clustering,
you create several cluster nodes that appear to users as one server.
If one of the nodes in the cluster fails, another node begins to
provide service (a process known as failover). Critical applications
and resources remain continuously available.

For more information about the deployment of servers, see the
Microsoft Windows Server 2003 Deployment Kit. You can view the
Windows Deployment and Resource Kits on the Web at:

  http://www.microsoft.com/reskit/

The following list of headings can help you find the information that
applies to you. For information about planning an upgrade or a new
installation, see Datactr1.TXT. For information about running Setup,
see Datactr2.TXT. For information about upgrading on cluster nodes,
see Datactr2.TXT and Datactr3.TXT.

Contents
--------

1.0 Installing on Cluster Nodes
2.0 Beginning the Cluster Installation on the First Cluster Node

======================================================================
1.0 Installing on Cluster Nodes
======================================================================
For information about installing on cluster nodes, see the sections
that follow. These sections can help you learn about:

* Sources of additional information

* How to plan for a new cluster installation

* Decisions that you need to make regarding your quorum resource
  (the resource that maintains the definitive copy of the cluster
  configuration data and that must always be available for the
  cluster to run)

-------------------------------------
1.1 Important Information to Review
-------------------------------------

To prepare for installing clustering:

* Review Datactr1.TXT and Datactr2.TXT for general information
  about Setup.

* As described in Datactr1.TXT, confirm that your hardware,
  including your cluster storage, is compatible with products in
  the Windows Server 2003 family by checking the hardware
  compatibility information in the Windows Catalog at:

  http://www.microsoft.com/windows/catalog/

* In addition, check with the manufacturer of your cluster storage
  hardware to be sure you have the drivers you need to use the
  hardware with Windows Server 2003, Datacenter Edition.

    IMPORTANT: You must ensure that hardware in your entire
    cluster solution is compatible with products in the
    Windows Server 2003 family. For more information, see
    "Cluster hardware and drivers" in the "Planning for Cluster
    Installation" section later in this text file.
----------------------------------
1.2 Other Sources of Information
----------------------------------

Following are sources of additional information on server clusters
and other topics related to high availability:

* For more information on server clusters, you can view Help and
  Support Center for Windows Server 2003, Datacenter Edition,
  on the Web. One way to view this information is to work from any
  computer that has Internet access (regardless of the operating
  system running on that computer). You can view Help and Support
  Center topics at:

  http://www.microsoft.com/windowsserver2003/proddoc/

  Another way to view this information is to open Help and Support
  Center. To do this, go to a computer running
  Windows Server 2003, Enterprise Edition, or
  Windows Server 2003, Datacenter Edition, click Start, and
  then click Help and Support.

* For more information about backing up and restoring data and
  configuration information, see the Microsoft Windows
  Server 2003 Resource Kit, "Server Management Guide."

* For more information about the following topics, see
  the Windows Server 2003 Deployment Kit, "Planning
  Server Deployments":

  * Deployment planning for server clusters and Network Load
    Balancing clusters

  * Planning for high availability (only available on the Windows
    Deployment and Resource Kits Web site)

  You can view the Windows Deployment and Resource Kits on the
  Web at:

  http://www.microsoft.com/reskit/

* For information about backup and recovery planning, change
  management, configuration management, and other concepts related
  to operational best practices, see resources in the Information
  Technology Infrastructure Library (ITIL). To see a description of
  ITIL, go to:

  http://www.itil.co.uk/

    Note: Web addresses can change, so you might be unable to
    connect to the Web sites mentioned here.
---------------------------------------
1.3 Planning for Cluster Installation
---------------------------------------

Before carrying out cluster installation, you need to plan hardware
and network details.

    CAUTION: If you are using a shared storage device, it is very
    important that only one node has access to the cluster disks
    when you turn on a computer and start the operating system
    before the cluster is created. Otherwise, the cluster disks
    can become corrupted. To prevent corruption of the cluster
    disks, shut down all but one cluster node, or use other
    techniques (for example, LUN masking, selective presentation,
    or zoning) to protect the cluster disks, before creating the
    cluster. Once the Cluster service is running properly on one
    node, the other nodes can be installed and configured
    simultaneously. Each node of your cluster must be running
    Windows Server 2003, Datacenter Edition.

In your planning, review the following items:

Cluster hardware and drivers
----------------------------
Microsoft supports only complete server cluster systems that are
compatible with the Windows Server 2003 family. Confirm that your
entire cluster solution is compatible with products in the
Windows Server 2003 family by checking the hardware compatibility
information in the Windows Catalog at:

  http://www.microsoft.com/windows/catalog/

For cluster disks, you must use the NTFS file system and configure
the disks as basic disks. You cannot configure cluster disks as
dynamic disks, and you cannot use features of dynamic disks such as
spanned volumes (volume sets).

Review the manufacturer's instructions carefully before you begin
installing cluster hardware. Otherwise, the cluster storage could
be corrupted.

To simplify configuration and eliminate potential compatibility
problems, consider using identical hardware for all nodes.
Network adapters on the cluster nodes
-------------------------------------
In your planning, decide what kind of communication each network
adapter will carry. The following list provides details about the
types of communication that an adapter can carry:

* Only node-to-node communication (private network). This implies
  that the server has one or more additional adapters to carry
  other communication.

  For node-to-node communication, you connect the network adapter
  to a private network that is used exclusively within the cluster.
  Note that if the private network uses a single hub or network
  switch, that piece of equipment becomes a potential point of
  failure in your cluster.

  The nodes of a cluster must be on the same subnet, but you can
  use virtual LAN (VLAN) switches on the interconnects between two
  nodes. If you use a VLAN, the point-to-point, round-trip latency
  must be less than 1/2 second, and the link between two nodes must
  appear as a single point-to-point connection from the perspective
  of the Windows operating system running on the nodes. To avoid
  single points of failure, use independent VLAN hardware for the
  different paths between the nodes.

  If your nodes use multiple private (node-to-node) networks, it is
  a best practice for the adapters on those networks to use static
  IP addresses instead of DHCP.

* Only client-to-cluster communication (public network). This
  implies that the server has one or more additional adapters to
  carry other communication.

* Both node-to-node and client-to-cluster communication (mixed
  network). When you have multiple network adapters per node, a
  network adapter that carries both kinds of communication can
  serve as a backup for other network adapters.

* Communication unrelated to the cluster. If a clustered node also
  provides services unrelated to the cluster, and there are enough
  adapters in the cluster node, you might want to use one adapter
  for carrying communication unrelated to the cluster.

The nodes of a cluster must be connected by two or more local area
networks (LANs); at least two networks are required to prevent a
single point of failure. A server cluster whose nodes are connected
by only one network is not a supported configuration. The adapters,
cables, hubs, and switches for each network must fail independently.
This usually implies that the components of any two networks must be
physically independent.

At least two networks must be configured to handle "All
communications (mixed network)" or "Internal cluster communications
only (private network)."

The recommended configuration for two adapters is to use one adapter
for the private (node-to-node only) communication and the other
adapter for mixed communication (node-to-node plus client-to-cluster
communication). Do not use teamed network adapters on the private
network.

If you use fault-tolerant network adapters, create multiple private
networks instead of a single fault-tolerant network.

Do not configure a default gateway or DNS or WINS server on the
private network adapters. Do not configure private network adapters
to use name resolution servers on the public network; otherwise, a
name resolution server on the public network might map a name to an
IP address on the private network. If a client then received that IP
address from the name resolution server, it might fail to reach the
address because no route from the client to the private network
address exists.

Configure WINS and/or DNS servers on the public network adapters. If
Network Name resources are used on the public networks, set up the
DNS servers to support dynamic updates; otherwise, the Network Name
resources might not fail over correctly. Also, configure a default
gateway on the public network adapters. If there are multiple public
networks in the cluster, configure a default gateway on only one of
them.

When you use either the New Server Cluster Wizard or the Add Nodes
Wizard to install clustering on a node that contains two network
adapters, by default the wizard configures both of the network
adapters for mixed network communications. As a best practice,
reconfigure one adapter for private network communications only. For
more information, see "Change how the cluster uses a network" in
Help and Support Center for Windows Server 2003, Datacenter Edition.
To open Help and Support Center, after completing Setup, click
Start, and then click Help and Support. You can also view Help and
Support Center topics on the Web at:

  http://www.microsoft.com/windowsserver2003/proddoc/

Consider choosing a name for each connection that indicates what it
is intended for. The name will make it easier to identify the
connection whenever you are configuring the server.

Manually configure the communication settings, such as Speed, Duplex
Mode, Flow Control, and Media Type, of each cluster network adapter.
Do not use automatic detection. You must configure all of the
cluster network adapters to use the same communication settings.

The adapters on a given node must connect to networks that use
different subnets.

Do not use the same IP address for two network adapters, even if
they are connected to two different networks.
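The addressing rules above can be collected into a small
configuration check. The sketch below is illustrative only: it uses
Python (which does not ship with Windows Server 2003), and the
per-adapter record format is a hypothetical one invented for this
example, not anything the Cluster service reads.

```python
import ipaddress

def check_node_adapters(adapters):
    """Return a list of rule violations for one node's adapters,
    following the guidance above: private adapters carry no default
    gateway and no DNS/WINS servers, at most one public adapter has
    a default gateway, every adapter is on a distinct subnet, and
    no IP address is reused."""
    problems = []
    ips, subnets, public_gateways = set(), set(), 0
    for a in adapters:
        net = ipaddress.ip_interface(f"{a['ip']}/{a['mask']}").network
        if a["ip"] in ips:
            problems.append(f"duplicate IP {a['ip']}")
        ips.add(a["ip"])
        if net in subnets:
            problems.append(f"two adapters on subnet {net}")
        subnets.add(net)
        if a["role"] == "private":
            if a.get("gateway") or a.get("dns") or a.get("wins"):
                problems.append(f"{a['name']}: private adapter must "
                                "not have a gateway or name servers")
        elif a.get("gateway"):
            public_gateways += 1
    if public_gateways > 1:
        problems.append("configure a default gateway on only one "
                        "public network")
    return problems

# Example: one private and one public adapter, configured as
# recommended above (addresses are made up for the example).
node = [
    {"name": "Private", "role": "private",
     "ip": "10.0.0.1", "mask": "255.255.255.0"},
    {"name": "Public", "role": "public",
     "ip": "192.168.1.10", "mask": "255.255.255.0",
     "gateway": "192.168.1.1", "dns": ["192.168.1.2"]},
]
print(check_node_adapters(node))  # an empty list means no violations
```

A check like this only mirrors the text; it cannot replace verifying
the configuration on the nodes themselves.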
    Note: Confirm that your entire cluster solution is compatible
    with the products in the Windows Server 2003 family. For more
    information, see "Cluster hardware and drivers" earlier in
    this section. If you use a crossover cable to implement a
    private network, then when the cluster is created on the first
    node, the second node should be turned on but stopped in the
    BIOS or at the startup menu. In this state, the Media Sense
    feature of Windows might not recognize the network adapter as
    connected. If you continue creating the cluster, the crossover
    cable will be detected when you start the second node. The
    network will be established in the default mode, which is
    mixed. You can then change the network mode to private using
    Cluster Administrator.
Cluster IP address
------------------
Obtain a static IP address for the cluster itself. You cannot use
DHCP for this address.

IP addressing for cluster nodes
-------------------------------
Determine how to handle the IP addressing for the individual cluster
nodes. Each network adapter on each node requires IP addressing. It
is a best practice to assign each network adapter a static IP
address. As an alternative, you can provide IP addressing through
DHCP. If you use static IP addresses, set the addresses for each
linked pair of network adapters (linked node-to-node) to be on the
same subnet.

Note that if you use DHCP for the individual cluster nodes, it can
act as a single point of failure. That is, if you set up your
cluster nodes so that they depend on a DHCP server for their IP
addresses, temporary failure of the DHCP server can mean temporary
unavailability of the cluster nodes. When deciding whether to use
DHCP, evaluate ways to ensure availability of DHCP services, and
consider the possibility of using long leases for the cluster nodes.
This helps to ensure that they always have a valid IP address.
Cluster name
------------
Determine or obtain an appropriate name for the cluster. This is the
name administrators will use for connections to the cluster. (The
actual applications running on the cluster typically have different
network names.) The cluster name must be different from the domain
name, from all computer names on the domain, and from other cluster
names on the domain.

Computer accounts and domain assignment for cluster nodes
---------------------------------------------------------
Make sure that the cluster nodes all have computer accounts in the
same domain. Cluster nodes cannot be in a workgroup.

Operator user account for installing and configuring the
Cluster service
----------------------------------------------------------------
To install and configure the Cluster service, you must be using an
account that is in the local Administrators group on each node. As
you install and configure each node, if you are not using an account
in the local Administrators group, you will be prompted to provide
the logon credentials for such an account.

Cluster service user account
----------------------------
Create or obtain the Cluster service user account. This is the name
and password under which the Cluster service will run. You need to
supply this user name and password during cluster installation.

It is best if the Cluster service user account is an account not
used for any other purpose. If you have multiple clusters, set up a
unique Cluster service user account for each cluster. The account
must be a domain account; it cannot be a local account. However, do
not make this account a domain administrator account, because it
does not need domain administrator user rights.

As part of the cluster setup process, the Cluster service user
account is added to the local Administrators group on each node. As
well as being a member of the local Administrators group, the
Cluster service user account requires an additional set of user
rights:

* Act as part of the operating system.
* Back up files and directories.
* Adjust memory quotas for a process.
* Increase scheduling priority.
* Log on as a service.
* Restore files and directories.

These user rights are also granted to the Cluster service user
account as part of the cluster setup process. Be aware that the
Cluster service user account will continue to have these user rights
even after all nodes are evicted from the cluster. The risk that
this presents is mitigated by the fact that these user rights are
not granted domain-wide, but rather only locally on each former
node. However, remove this account from each evicted node if it is
no longer needed.

Be sure to keep the password from expiring on the Cluster service
user account (follow your organization's policies for password
renewal).

Volume for important cluster configuration information
(checkpoint and log files)
----------------------------------------------------------------------
Plan on setting aside a volume on your cluster storage for holding
important cluster configuration information. This information makes
up the cluster quorum resource, which is needed when a cluster node
stops functioning. The quorum resource provides node-independent
storage of crucial data needed by the cluster. For important
information on quorum resource options, see "Quorum Resource
Options" later in this text file.

The recommended minimum size for the volume is 500 MB. It is
recommended that you not store user data on the volume used for the
quorum resource.

    Note: When planning and carrying out disk configuration for
    the cluster disks, configure them as basic disks with all
    partitions formatted as NTFS (they can be either compressed or
    uncompressed). Partition and format all disks on the cluster
    storage device before adding the first node to your cluster.
    Do not configure them as dynamic disks, and do not use spanned
    volumes (volume sets) or Remote Storage on the cluster disks.
    For the 64-bit version of Windows Server 2003,
    Datacenter Edition, cluster disks on the cluster storage
    device must be partitioned as MBR disks and not as GPT disks.
-----------------------------
1.4 Quorum Resource Options
-----------------------------

With server clusters on Windows Server 2003, Datacenter Edition,
you can now choose among three ways to set up the quorum resource
(the resource that maintains the definitive copy of the cluster
configuration data and that must always be available for the
cluster to run). The first is a single node server cluster, which
has been available in the past and continues to be supported. A
single node cluster is often used for development and testing and
can be configured with, or without, external cluster storage
devices. For single node clusters without an external cluster
storage device, the local disk is configured as the cluster quorum
device.

The second option is a single quorum device server cluster, which
has also been available in earlier Windows versions. This model
places the cluster configuration data on a shared cluster storage
device that all nodes can access. This is the most common model and
is recommended for most situations. You might choose the single
quorum device model if all of your cluster nodes are in the same
location and you want to take advantage of the fact that such a
cluster continues supporting users even if only one node is
running.

The third option, which is new for Windows Server 2003,
Datacenter Edition, is a "majority node set." A majority node set
is a single quorum resource from a server cluster perspective;
however, the cluster configuration data is actually stored on
multiple disks across the cluster. The majority node set resource
ensures that the cluster configuration data is kept consistent
across the different disks. In the majority node set model, every
node in the cluster uses a directory on its own local system disk
to store the cluster configuration data. If the configuration of
the cluster changes, that change is reflected across the different
disks. Be aware that it is also possible to have shared storage
devices in a majority node set cluster. The exact configuration
depends on the requirements for your installation.

Use a majority node set cluster only in targeted scenarios, such as:

* Geographically dispersed clusters: a cluster that spans multiple
  sites.

* Eliminating single points of failure: although, when using a
  single cluster storage device, the quorum disk itself can be
  made highly available via RAID, the controller port or the host
  bus adapter (HBA) itself may be a single point of failure.

* Clusters with no shared disks: some specialized configurations
  need tightly consistent cluster features without having shared
  disks.

* Clusters that host applications that can fail over, but where
  there is some other, application-specific way to replicate or
  mirror data between nodes: for example, this model is useful if
  you use database log shipping to keep a SQL database up to date.

Do not configure your cluster as a majority node set cluster unless
it is part of a cluster solution offered by your Original Equipment
Manufacturer (OEM), Independent Software Vendor (ISV), or
Independent Hardware Vendor (IHV).
1.4.1 Cluster Model Considerations
-----------------------------------

Before implementing your cluster, consider what type of quorum
resource solution you plan to use. Take into consideration the
following differences between single quorum device clusters and
majority node set clusters.

    Note: The following information is presented to help you make
    basic decisions about the placement and management of your
    cluster nodes and quorum resource. It does not provide all the
    details about the requirements for each cluster model, or how
    each model handles failover situations. If you are not sure
    which model to use or where you want to place your cluster
    nodes, install Windows Server 2003, Datacenter Edition, on the
    first cluster node, and then consult the online cluster
    documentation in Help and Support Center for
    Windows Server 2003, Datacenter Edition. See "Using a Majority
    Node Set" later in this text file for more information on how
    to access Help and Support Center.
Node failover behavior
----------------------
The failover behavior of the majority node set model is
significantly different from the behavior of the single quorum
device model:

* Using the single quorum device model, you can maintain cluster
  availability with only a single operational node.

* If you use a majority node set, more than half of the nodes,
  that is, (number of nodes configured in the cluster / 2) + 1
  nodes (using integer division), must be operational to maintain
  cluster availability. The following table shows the number of
  node failures that a given majority node set cluster can
  tolerate and continue to operate:
  ===================================================================
  NUMBER OF NODES      NUMBER OF NODE FAILURES    NUMBER OF NODES
  CONFIGURED IN THE    ALLOWED BEFORE             NEEDED TO CONTINUE
  CLUSTER              CLUSTER FAILURE            CLUSTER OPERATIONS
  -------------------------------------------------------------------
  1                    0                          1
  2                    0                          2
  3                    1                          2
  4                    1                          3
  5                    2                          3
  6                    2                          4
  7                    3                          4
  8                    3                          5
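The table follows directly from the majority rule. As an
illustrative check (Python is used here purely as a calculator;
nothing like this ships with the product), the two columns can be
computed from the node count:

```python
def majority_needed(nodes: int) -> int:
    """Nodes that must stay operational: more than half,
    i.e. floor(nodes / 2) + 1."""
    return nodes // 2 + 1

def failures_tolerated(nodes: int) -> int:
    """Node failures a majority node set cluster can survive
    while still holding a majority."""
    return nodes - majority_needed(nodes)

# Reproduce the table above for 1 through 8 configured nodes.
for n in range(1, 9):
    print(n, failures_tolerated(n), majority_needed(n))
```

Note that adding a second node to a one-node majority node set
cluster gains no fault tolerance: both sizes tolerate zero
failures, which is why this model pays off only at three or more
nodes.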
Geographic considerations
-------------------------
You would commonly use a single quorum resource model if all nodes
in your cluster will be in the same geographic location. As part of
this requirement, your nodes must be connected to the same physical
storage device.

A majority node set, on the other hand, would typically be
appropriate if you have geographically dispersed nodes. The cluster
configuration data is stored locally on each node, on a file share
that is shared out to the other nodes on the network. However,
those shares must always be accessible, or nodes can fail.

There are other specific requirements for geographically dispersed
clusters, including the requirement that the round-trip latency of
the network between cluster nodes be a maximum of 500 milliseconds.
For information on cluster solutions that meet all requirements for
a geographically dispersed cluster, refer to the hardware
compatibility information in the Windows Catalog at:

  http://www.microsoft.com/windows/catalog/
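The 500-millisecond limit can be sanity-checked with an ad hoc
probe before committing to a dispersed design. The sketch below is
a rough illustration only, timing TCP connection setup with
Python's standard library; it is not how the Cluster service
measures its links, and the host/port are placeholders for a
service already listening on the remote node.

```python
import socket
import time

MAX_RTT = 0.5  # the 500 ms limit for geographically dispersed clusters

def round_trip_ok(host: str, port: int, samples: int = 3) -> bool:
    """Crudely estimate round-trip latency by timing TCP connection
    setup (roughly one network round trip) to a peer node, and
    compare the worst sample against the 500 ms limit. A connection
    that cannot be set up within the limit raises an error, which
    also counts as failing the check."""
    worst = 0.0
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=MAX_RTT):
            pass
        worst = max(worst, time.monotonic() - start)
    return worst < MAX_RTT
```

Several samples are taken because a single measurement can be
skewed by transient load; a real assessment would measure over a
long period and under production traffic.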
Hardware
--------
Microsoft supports only complete server cluster systems that are
compatible with the Windows Server 2003 family of products. For
both cluster models, confirm that your system or hardware
components, including your cluster disks, are compatible with
products in the Windows Server 2003 family by checking the hardware
compatibility information in the Windows Catalog at:

  http://www.microsoft.com/windows/catalog/
1.4.2 Using a Majority Node Set
--------------------------------

This section tells how to obtain additional information about the
majority node set model. For a description of a majority node set,
see "Quorum Resource Options" earlier in this text file.

    IMPORTANT: Before implementing a majority node set, it is
    highly recommended that you read the online clustering
    documentation in Help and Support Center to thoroughly
    understand all the considerations, requirements, and
    restrictions for each type of quorum resource solution.

>>>TO OBTAIN ADDITIONAL INFORMATION ABOUT MAJORITY NODE SET MODEL
1.  If Windows Server 2003, Datacenter Edition, is not already
    installed, install Windows Server 2003, Datacenter Edition,
    on the first node, as documented later in this text file.
2.  On the first node, click Start, and then click Help and
    Support.
3.  Click "Availability and Scalability."
4.  Click "Windows Clustering."
5.  Click "Server Clusters."
6.  Click "Concepts."
7.  Click "Planning Your Server Cluster."
8.  Click "Choosing a Cluster Model."
9.  Read the documentation describing the different options for
    the quorum resource.
10. Follow the procedure outlined in the topic titled "To create
    a cluster."
11. Install or upgrade to Windows Server 2003, Datacenter
    Edition, on the remaining nodes.

    Note: You can also view Help and Support Center topics on
    the Web at:

    http://www.microsoft.com/windowsserver2003/proddoc/
======================================================================
2.0 Beginning the Cluster Installation on the First Cluster Node
======================================================================

The steps you carry out when first physically connecting and
installing the cluster hardware are crucial. Be sure to follow the
hardware manufacturer's instructions for these initial steps.

    IMPORTANT: Carefully review your network cables after
    connecting them. Make sure no cables are crossed by mistake
    (for example, a private network connected to a public one).

2.1 Initial Steps to Carry Out in the BIOS or EFI When Using a
Fibre Channel Shared Storage Device or No Shared Storage Device
---------------------------------------------------------------------

* Turn on a single node. Leave all other nodes turned off.

* During this initial installation phase, remain in the BIOS or
  Extensible Firmware Interface (EFI) configuration process, and
  do not allow the operating system to start. While viewing the
  BIOS or EFI configuration screens, ensure that you can scan the
  bus and see the drives from the active cluster node. On a 32-bit
  computer, use the BIOS configuration screens. On an Itanium
  architecture-based computer, use the EFI configuration screens.
  Consult the instructions from your manufacturer to determine
  whether these configuration screens are displayed automatically
  or whether you must, after turning on the computer, press
  specific keys to access them. Follow the manufacturer's
  instructions for completing the BIOS or EFI configuration
  process.
  503. 2.2 Final Steps to Complete the Installation
  504. ----------------------------------------------
  505. If you have not already installed Windows Server 2003,
  506. Datacenter Edition, on the first cluster node, install it before
  507. proceeding. For information about decisions you must make, such as
  508. decisions about licensing, see Datactr1.TXT. For information about
  509. running Setup, see Datactr2.TXT.
  510. After you complete the BIOS or EFI configuration, start the operating
  511. system on one cluster node only, and complete the configuration of the
  512. Cluster service using Cluster Administrator.
  513. With the Cluster Administrator New Server Cluster Wizard, you can
  514. choose between Typical (full) configuration and Advanced (minimum)
  515. configuration options. Typical configuration is appropriate for most
  516. installations and results in a completely configured cluster. Use
  517. the Advanced configuration option only for clusters that have complex
  518. storage configurations that the New Server Cluster Wizard cannot
  519. validate or for configurations in which you do not want the cluster
  520. to manage all of the storage. The following examples describe
  521. each situation:
* In some complex storage solutions, such as a Fibre Channel
  switched fabric that contains several switches, a particular
  storage unit might have a different identity on each computer in
  the cluster. Although this is a valid storage configuration, it
  violates the storage validation heuristics in the New Server
  Cluster Wizard. If you have this type of storage solution, you
  might receive an error when you try to create a cluster by using
  the Typical configuration option. If your storage configuration
  is set up correctly, you can disable the storage validation
  heuristics and avoid this error by restarting the New Server
  Cluster Wizard and selecting the Advanced configuration
  option instead.
* On particular nodes in a cluster, you might want some disks to
  be clustered and other disks to be kept private. The Typical
  configuration option configures all disks as clustered disks and
  creates cluster resources for them all. With the Advanced
  configuration option, however, you can keep certain disks
  private, because this option creates a cluster in which only the
  quorum disk is managed by the cluster (if you chose to use a
  physical disk as the quorum resource). After the cluster is
  created, you must then use Cluster Administrator to add any
  other disks that you want the cluster to manage.
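After an Advanced (minimum) configuration, disks can also be added
to the cluster from the command line with the cluster.exe tool. The
following is a sketch only: the resource name "Disk Q:" and the
group name "Cluster Group" are placeholders, and the exact
parameters required for your storage may differ, so verify the
syntax in Help and Support Center before use.

```shell
REM Sketch only: add a shared disk to an existing cluster after an
REM Advanced (minimum) configuration. "Disk Q:" and "Cluster Group"
REM are placeholder names; substitute your own resource and group.
cluster res "Disk Q:" /create /group:"Cluster Group" /type:"Physical Disk"

REM Bring the new disk resource online so the cluster manages it.
cluster res "Disk Q:" /online
```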
If you are using a shared storage device: before the cluster is
created, when you turn the computer on and start the operating
system, it is very important that only one node has access to the
cluster disks. Otherwise, the cluster disks can become corrupted.
To prevent this corruption, shut down all but one cluster node, or
use other techniques (for example, LUN masking, selective
presentation, or zoning) to protect the cluster disks before
creating the cluster. Also, before starting the installation of the
second and subsequent nodes, ensure that all disks that are to be
managed by the cluster have disk resources associated with them. If
they do not, the disk data will be corrupted, because the disks
will not be protected and multiple nodes will attempt to connect to
them at the same time.
>>>TO SET UP YOUR CLUSTER USING CLUSTER ADMINISTRATOR

1. Open Cluster Administrator by clicking Start, pointing to
   Programs, pointing to Administrative Tools, and then clicking
   Cluster Administrator.

2. In the Open Connection to Cluster dialog box that appears, in
   Action, select Create new cluster, and then click OK.

3. The New Server Cluster Wizard appears. Click Next to continue.

4. Upon completion of the New Server Cluster Wizard, click Finish.

IMPORTANT: During the cluster creation process (using the Quorum
button on the Proposed Cluster Configuration page), you can select
a quorum resource type (that is, a Local Quorum resource, a
Physical Disk or other storage-class device resource, or a Majority
Node Set resource). For information about how these quorum resource
types relate to the different cluster models, see "Quorum Resource
Options" earlier in this text file. Do not use Manage Your Server
or the Configure Your Server Wizard to configure cluster nodes.
>>>TO OBTAIN ADDITIONAL INFORMATION ABOUT HOW TO INSTALL AND
   CONFIGURE THE CLUSTER SERVICE

1. After completing Setup of Windows Server 2003, Datacenter
   Edition, click Start, and then click Help and Support.

2. Click "Availability and Scalability."

3. Click "Windows Clustering."

4. Click "Server Clusters."

5. Click "Checklists: Creating Server Clusters," and then click
   "Checklist: Planning and creating a server cluster."

6. Use the checklist to guide you through the process of completing
   the installation of your server cluster.
Unattended Installation
-----------------------
To create and configure a cluster after unattended Setup, run a
script that invokes the cluster /create command and supplies all
the necessary configuration information on the command line. For
more information about creating a cluster by using unattended
installation, after you install Windows Server 2003, Datacenter
Edition, see "To create a cluster" in Help and Support Center. To
open Help and Support Center, after completing Setup, click Start,
and then click Help and Support. Also, see the Windows Server 2003
Deployment Kit, especially "Automating and Customizing
Installations."

You can also view Help and Support Center topics on the Web at:

http://www.microsoft.com/windowsserver2003/proddoc/
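As an illustration, such a script might resemble the following
sketch. All values shown (cluster name, node name, account,
password, and IP address) are placeholders, and the exact
cluster.exe parameters should be verified against "To create a
cluster" in Help and Support Center before use.

```shell
REM Sketch only: create a new cluster from the command line after
REM unattended Setup. Every value below is a placeholder for
REM illustration; substitute names appropriate for your network.
cluster /cluster:MyCluster /create /node:Node1 ^
    /user:MyDomain\ClusterAdmin /pass:Password ^
    /ipaddr:10.0.0.10,255.255.255.0,"Local Area Connection"

REM Additional nodes can later be joined to the cluster; see the
REM cluster.exe reference in Help and Support Center for details.
```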
Information in this document, including URL and other Internet
Web site references, is subject to change without notice.
Unless otherwise noted, the example companies, organizations,
products, domain names, e-mail addresses, logos, people, places
and events depicted herein are fictitious, and no association
with any real company, organization, product, domain name,
e-mail address, logo, person, place or event is intended or
should be inferred. Complying with all applicable copyright laws
is the responsibility of the user. Without limiting the rights
under copyright, no part of this document may be reproduced,
stored in or introduced into a retrieval system, or transmitted
in any form or by any means (electronic, mechanical, photocopying,
recording, or otherwise), or for any purpose, without the express
written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks,
copyrights, or other intellectual property rights covering subject
matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this
document does not give you any license to these patents, trademarks,
copyrights, or other intellectual property.

(c) 2002-2003 Microsoft Corporation. All rights reserved.

The names of actual companies and products mentioned herein may
be the trademarks of their respective owners.