Managing the Windows Server Platform: File Service Product Operations Guide



The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

© 2003 Microsoft Corporation.
All rights reserved.

Microsoft, Active Directory, ActiveX, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

Contents

1. Introduction to Product Operations Guide
   Document Purpose
   Intended Audience
   How to Use This Guide
   Background
2. High-Level Processes for Maintaining Windows Server 2003 File Service
   Overview
   Required Technology
   Maintenance Processes Checklist
     Operating Quadrant
     Supporting Quadrant
     Optimizing Quadrant
     Changing Quadrant
3. Detailed Maintenance Processes
   Overview
   Process: Data backup, restore, and recovery operations
     Task: Run daily incremental backup
   Process: Data backup, restore, and recovery operations
     Task: Run weekly normal backup
   Process: Design for service recovery
     Task: Update automated system recovery (ASR) backup
   Process: Design for service recovery
     Task: Validate ASR recovery
   Process: Maintaining the directory
     Task: Back up DFS namespace configuration
   Process: Data backup, restore, and recovery operations
     Task: Verify previous day’s backup job
   Process: Storage resource management
     Task: Monitor available disk space
   Process: Storage resource management
     Task: Review disk fragmentation
   Process: Data backup, restore, and recovery operations
     Task: Verify restore
   Process: Managing resources and service performance
     Task: Capture service performance statistics
     Task: Capture service usage statistics
   Process: Perform monitoring
     Task: Review quota levels
   Process: Reviewing configuration items
     Task: Compliance check—verify that shares are created in the proper location
   Process: Problem recording and classification
     Task: Review daily problem management report
   Process: Investigation and diagnosis
     Task: Create weekly service activity report
   Process: Incident closure
     Task: Roll up activity report into monthly metric
   Process: Managing resources and service performance
     Task: Capture size of DFS namespace
   Process: Managing resources and service performance
     Task: Create quota report
     Task: Create a service performance and usage report
     Task: Create a system load and utility report
   Process: Managing the directory
     Task: Check status of DFS
   Process: Investigation and diagnosis
     Task: Respond to daily service request
   Process: Change classification and authorization
     Task: Attend CAB meeting
     Task: Review emergency change request
   Process: Reviewing configuration items
     Task: Capture configuration snapshot
4. Processes by MOF Role Clusters
   Operations Role Cluster
   Support Role Cluster
   Release Role Cluster
   Infrastructure Role Cluster
   Security Role Cluster
   Partner Role Cluster
5. Troubleshooting
   Overview
   Problem #1: “Path not found” or empty folder
   Problem #2: Slow connection time
   Problem #3: How to troubleshoot FRS-enabled DFS directories
   Problem #4: Using Defrag.exe on a disk that hosts FRS-replicated content
   Problem #5: DFS links not visible
   Problem #6: DFS root does not appear in MMC
   Problem #7: NTFS file system log file size bottlenecks
   Problem #8: Excessive CPU use by Clussvc.exe or Rsrcmon.exe
   Problem #9: "A DFS root already exists in this cluster node"
   Problem #10: DNS name problems

Contributors

Program Manager: Jeff Yuhas, Microsoft, USA
Lead Writers: Michael Sarabosing, Covestic, USA; Akil Washington, Covestic, USA
Other Contributors: Steve Barnard, Microsoft Consulting Services; Shiloh Cleofe, Microsoft Corporation
Test Manager: Greg Gicewicz, Microsoft Corporation
QA Manager: Jim Ptaszynski, Microsoft Corporation
Lead Technical Writer: Jerry Dyer, Microsoft Corporation
Lead Technical Editor: Laurie Dunham, Microsoft Corporation
Technical Editors: Bill Karn, Volt Technical Services; Patricia Rytkonen, Volt Technical Services
Production Editor: Kevin Klein, Volt Technical Services

1. Introduction to Product Operations Guide

Document Purpose

This guide
describes processes and procedures for improving the management of Microsoft® Windows Server™ 2003 File Service in an information technology (IT) infrastructure.

Intended Audience

This material should be useful for anyone planning to deploy this product into an existing IT infrastructure, especially one based on the IT Infrastructure Library (ITIL)—a comprehensive set of best practices for IT service management—and Microsoft Operations Framework (MOF). It is aimed primarily at two groups: IT managers and IT support staff (including analysts and service-desk specialists).

How to Use This Guide

This guide is divided into five main chapters. The first chapter provides basic background information. The second provides a high-level checklist of the tasks required for maintaining this product. The third takes a more detailed look at the tasks described in the maintenance section. The fourth organizes tasks by the MOF role cluster responsible for each task. The fifth provides information about common troubleshooting techniques for Windows Server 2003 File Service.

The guide may be read as a single volume, including the detailed maintenance and troubleshooting sections. Reading the document this way provides the context needed to understand later material more readily. However, some readers will prefer to use the document as a reference, looking up information only as they need it.

Background

This guide is based on Microsoft Solutions for Management (MSM). MSM provides a combination of best practices, best-practice implementation services, and best-practice automation, all of which help customers achieve operational excellence as demonstrated by high quality of service; industry-leading reliability, availability, and security; and low total cost of ownership (TCO).

These MSM best practices are based on MOF, a structured yet flexible approach centered around ITIL.
MOF includes guidelines on how to plan, deploy, and maintain IT operational processes in support of mission-critical service solutions.

Central to MOF—and to understanding the structure of this guide—are the MOF Process and Team Models. The Process Model and its underlying service management functions (SMFs) are the foundation for the process-based approach that this guide recommends for maintaining a product. The Team Model and its role clusters offer guidance for ensuring the proper people are assigned to operational roles.

Figure 1 shows the MOF Process Model combined with the SMFs that make up each quadrant of the Process Model.

Figure 1. MOF Process Model and SMFs

Figure 2 shows the MOF Team Model, along with some of the many functional roles or function teams that might exist in service management organizations. These roles and function teams are shown mapped to the MOF role cluster to which they would likely belong.

Figure 2. MOF Team Model and examples of functional roles or teams

The MOF Team Model is built on six quality goals, which are described and matched with the applicable team role cluster in Table 1.

Table 1. MOF Team Model Quality Goals and Role Clusters

●Effective release and change management; accurate inventory tracking of all IT services and systems: Release
●Management of physical environments and infrastructure tools: Infrastructure
●Quality customer support and a service culture: Support
●Predictable, repeatable, and automated system management: Operations
●Mutually beneficial relationships with service and supply partners: Partner
●Protected corporate assets, controlled authorization, and proactive security planning: Security

Further information about MSM and MOF is available on the Microsoft website, or by searching for the topic on TechNet.
You can also contact your local Microsoft or partner representative.

2. High-Level Processes for Maintaining Windows Server 2003 File Service

Overview

Every company consists of employees (people), activities that these employees perform (processes), and tools that help them perform these activities (technology). Regardless of what the business is, it most likely consists of people, processes, and technology working together to achieve a common goal. Table 2 illustrates this point.

Table 2. Examples of People, Process, and Technology Working Together

●Auto repair industry: mechanic (people); repair manual (process); socket set (technology)
●Software development industry: programmer (people); project plan (process); compiler, debugger (technology)
●IT operations: IT technician (people); Microsoft Operations Framework (process); Windows Server 2003 File Service (technology)

At the heart of any IT organization is the ability to efficiently manage file resources while keeping them available and secure for users. As the network expands with more users located on-site, in remote locations, or even at partner companies, IT administrators face an increasingly heavy burden.

This product operations guide combines people and process with technology to offer best-practice advice for the maintenance of Windows Server 2003 file services.

Required Technology

Table 3 lists the tools or technologies used in the procedures described in this guide. All tools should be accessed from a Windows Server 2003 server console, except in those cases where a link is provided.

Table 3. File Service Tools or Technologies

Required technology:
●Disk Defragmenter. Analyzes volumes for fragmentation and defragments them. Location: Start > All Programs > Accessories > System Tools > Disk Defragmenter
●System Monitor (formerly known as Performance Monitor). Collects data on server health and performance. Location: Start > All Programs > Administrative Tools > System Monitor
●Event Viewer. Monitors and gathers information on system, security, and application events. Location: Start > All Programs > Administrative Tools > Event Viewer
●Quota Entries window. Views and configures quotas on a volume. Location: the properties page of an NTFS volume
●Distributed File System (DFS) Microsoft Management Console (MMC) snap-in. Monitors the DFS namespace, including roots, links, and targets. Location: Start > All Programs > Administrative Tools > Distributed File System
●Disk Cleanup. Removes temporary files, Internet cache files, and unnecessary program files. Location: Start > All Programs > Accessories > System Tools > Disk Cleanup
●Backup. Performs backup and restore operations. Location: Start > All Programs > Accessories > System Tools > Backup
●Ntfrsutl.exe. Troubleshoots FRS on DFS. Location: Windows Server 2003 Support Tools
●Dfsutil.exe. Configures and troubleshoots DFS. Location: Windows Server 2003 Support Tools

Recommended technology:
●Srvinfo.exe. Gathers system information from servers. Location: Windows Server 2003 Resource Kit
●Volperf.exe. Installs performance objects and counters for the volume shadow copy service. Location: Windows Server 2003 Resource Kit
●Fsutil.exe. Manages the Windows file system. Location: \Winnt\System32
●Iologsum.cmd. Troubleshoots FRS on DFS. Location: Windows Server 2003 Support Tools
●Windows Management Instrumentation (WMI). Configures WMI settings, such as permissions for authorized users and groups and turning error logging on or off. Location: Start > Run > type wmimgmt.msc

Maintenance Processes Checklist

The following checklists provide a quick reference for those product maintenance processes that must be performed on a regular basis.
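Many of the recurring checks listed below, such as the daily storage resource management task of monitoring available disk space, lend themselves to lightweight scripting. The following sketch is illustrative only: the 10 percent threshold and the volume list are assumptions, not values prescribed by this guide.

```python
# Sketch of a scripted free-space check for the daily storage resource
# management process. The threshold and volume list are illustrative
# assumptions; substitute your organization's service level targets.
import shutil

def low_space_volumes(volumes, min_free_fraction=0.10):
    """Return (path, free_bytes) for volumes below the free-space threshold."""
    flagged = []
    for path in volumes:
        usage = shutil.disk_usage(path)  # named tuple: total, used, free
        if usage.free / usage.total < min_free_fraction:
            flagged.append((path, usage.free))
    return flagged

# Example: check the system volume (on Windows this might be "C:\\").
print(low_space_volumes(["/"]))
```

A report produced by such a script can feed the quota and disk space reviews described later in this guide.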
These process lists are a summary of the processes described in subsequent sections of this guide. They are limited to those processes required for maintaining the product.

Operating Quadrant

The processes for this section are based on the service management function guides that make up the MOF Operating Quadrant. For more information on the MOF Process Model and the SMFs, see the MOF guidance on Microsoft TechNet.

Storage Management SMF

Daily processes:
●Storage resource management (MOF role cluster: Infrastructure)
●Data backup, restore, and recovery operations (MOF role cluster: Support)

Weekly processes:
●Storage resource management (MOF role cluster: Infrastructure)
●Data backup, restore, and recovery operations (MOF role cluster: Support)

Monthly processes:
●There are no monthly processes for this SMF.

As-needed processes:
●Data backup, restore, and recovery operations (MOF role cluster: Operations)

Service Monitoring and Control SMF

Daily processes:
●Perform monitoring (MOF role cluster: Infrastructure)

Weekly processes:
●There are no weekly processes for this SMF.

Monthly processes:
●There are no monthly processes for this SMF.

As-needed processes:
●There are no as-needed processes for this SMF.

Directory Services Administration SMF

Daily processes:
●Maintaining the directory (MOF role cluster: Infrastructure)

Weekly processes:
●There are no weekly processes for this SMF.

Monthly processes:
●There are no monthly processes for this SMF.

As-needed processes:
●There are no as-needed processes for this SMF.

Supporting Quadrant

The processes for this section are based on the SMF guides that make up the MOF Supporting Quadrant.
Incident Management SMF

Daily processes:
●Investigation and diagnosis (MOF role cluster: Support)

Weekly processes:
●Investigation and diagnosis (MOF role cluster: Support)

Monthly processes:
●Incident closure (MOF role cluster: Operations)

As-needed processes:
●There are no as-needed processes for this SMF.

Problem Management SMF

Daily processes:
●Problem recording and classification (MOF role cluster: Operations)

Weekly processes:
●There are no weekly processes for this SMF.

Monthly processes:
●There are no monthly processes for this SMF.

As-needed processes:
●There are no as-needed processes for this SMF.

Optimizing Quadrant

The tasks for this section are based on the SMF guides that make up the MOF Optimizing Quadrant.

Availability Management SMF

Daily processes:
●There are no daily processes for this SMF.

Weekly processes:
●Design for service recovery (MOF role cluster: Operations)

Monthly processes:
●There are no monthly processes for this SMF.

Quarterly processes:
●Design for service recovery (MOF role cluster: Operations)

As-needed processes:
●There are no as-needed processes for this SMF.

Capacity Management SMF

Daily processes:
●Managing resources and service performance (MOF role cluster: Operations)

Weekly processes:
●Managing resources and service performance (MOF role cluster: Infrastructure)

Monthly processes:
●Managing resources and service performance (MOF role cluster: Infrastructure)

As-needed processes:
●There are no as-needed processes for this SMF.

Changing Quadrant

The processes for this
section are based on the SMF guides that make up the MOF Changing Quadrant.

Change Management SMF

Daily processes:
●Change classification (MOF role cluster: Infrastructure)

Weekly processes:
●Change authorization (MOF role cluster: Infrastructure)

Monthly processes:
●There are no monthly processes for this SMF.

As-needed processes:
●There are no as-needed processes for this SMF.

Configuration Management SMF

Daily processes:
●There are no daily processes for this SMF.

Weekly processes:
●There are no weekly processes for this SMF.

Monthly processes:
●Reviewing configuration items (MOF role cluster: Infrastructure)

As-needed processes:
●There are no as-needed processes for this SMF.

3. Detailed Maintenance Processes

Overview

This chapter provides detailed information about the processes that must be performed in order to maintain Windows Server 2003 File Service. The chapter is first arranged according to the MOF quadrant to which each process belongs. The quadrants are:

●Operating Quadrant
●Supporting Quadrant
●Optimizing Quadrant
●Changing Quadrant

Within each quadrant, the processes are further arranged according to the MOF SMF guides that make up that quadrant, the particular Team Model role cluster to which the process belongs, and the time (daily, weekly, monthly, or as-needed) when the process occurs.

For more information about the MOF Process Model, the MOF SMF guides that make up each quadrant of the model, the MOF Team Model, and team role clusters, see the MOF guidance on Microsoft TechNet.

Operating Quadrant
Storage Management SMF
Support Role Cluster
Daily

Process: Data backup, restore, and recovery operations

Description

Backing up, restoring, and recovering data are key storage management activities for maintaining company data.
Data should be classified by type, and a strategy should be developed to ensure that those processes fulfill business requirements and service level objectives.

Task: Run daily incremental backup

Purpose

Performing regularly scheduled backups is an integral part of any file service operations environment. A good backup strategy should include daily incremental or differential backups as well as weekly backups. Numerous strategies exist regarding the frequency and types of backup jobs that an operations team can implement.

Procedure: Configure incremental backup job

1. Start the Backup utility and select the Backup tab.
2. Select the drives, folders, and files that will be included in the backup. It is a good idea to include the system state information as part of the backup operation.
3. On the Tools menu, click Options. In the Options window, on the Backup Log tab, select Detailed, and click OK. Backup logs can be vital for troubleshooting and recording the status of the backup operation. The default setting in Windows Server 2003 is for backup logs to contain summary information such as loading a tape, starting the backup, backing up files, backing up bytes, or failing to open a file. Some operations environments require more detailed information—specifically, what files are being backed up for a particular backup job.
4. On the Tools menu, click Options. In the Options window, click the Backup Type tab. In the Default Backup Type drop-down list, select Incremental, and click OK.
5. In Backup Destination, select one of the following:
●Choose file to back up files and folders to a file. This is the default setting.
●Choose a tape device if you want to back up files and folders to a tape.
6. In Backup media or file name, select one of the following:
●If you are backing up files and folders to a file, enter the path and file name of the backup (.bkf) file.
●If you are backing up files and folders to tape, choose the tape you want to use.
7. Click Start Backup, then click Advanced, select Data Verification, and click OK.
8. Click Schedule and enter the logon name and password that the backup will run as, and then click OK. In the Schedule Job Options window, enter the name for the backup job—for example, "ServerName-IncBackup-Date"—and click OK.
9. In the Schedule Job window, confirm that the Schedule tab is selected. Under Schedule Task, select Weekly and click the days of the week you want the incremental job to run. In Start time, enter the time you want the backup to start and click OK.
10. Enter the logon name and password that the backup job will run as and then click OK. Click OK again to exit the window.

The daily incremental job is now scheduled to run.

Dependencies

None

Technology Required

Backup.exe

Operating Quadrant
Storage Management SMF
Support Role Cluster
Weekly

Process: Data backup, restore, and recovery operations

Description

Storing, restoring, and recovering data are key storage management activities for maintaining company data. Data should be classified by type, and a strategy should be developed to ensure that backup and recovery processes fulfill business requirements and service level objectives.

Task: Run weekly normal backup

Purpose

Performing regularly scheduled backups is an integral part of any file service operations environment. A good backup strategy should include daily incremental or differential backups as well as weekly backups.
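The GUI scheduling steps above also have a command-line counterpart: both the daily incremental job and the weekly normal job can be driven by Ntbackup.exe, for example from a script invoked by the task scheduler. The sketch below assembles the two command lines; the job names, the selection (.bks) file, and the destination paths are hypothetical examples, not values from this guide.

```python
# Assemble ntbackup command lines for the daily incremental and weekly
# normal jobs. The job names, the .bks selection file, and the .bkf
# destinations are hypothetical placeholders.
def ntbackup_cmd(job_name, selection_bks, bkf_path, backup_type):
    return (
        f'ntbackup backup systemstate "@{selection_bks}" '
        f'/J "{job_name}" /F "{bkf_path}" '
        f'/M {backup_type} /V:yes /L:f'  # verify data; full (detailed) log
    )

daily = ntbackup_cmd("Server01-IncBackup", r"C:\Backups\files.bks",
                     r"D:\Backups\daily-inc.bkf", "incremental")
weekly = ntbackup_cmd("Server01-NormalBackup", r"C:\Backups\files.bks",
                      r"D:\Backups\weekly-normal.bkf", "normal")
print(daily)
print(weekly)
```

The /M switch selects the backup type, /V:yes enables the data verification selected in step 7, and /L:f requests the detailed log described in step 3.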
Numerous strategies exist regarding the frequency and types of backup jobs that an operations team can implement.

Procedure: Configure normal backup job

1. Start the Backup utility.
2. On the Backup tab, select the drives, folders, and files that will be included in the backup job. It is a good idea to include the system state information as part of the backup operation.
3. On the Tools menu, click Options. In the Options window, click the Backup Log tab, select Detailed, and click OK. Backup logs can be vital for troubleshooting and recording the status of the backup operation. The default setting in Windows Server 2003 is for backup logs to contain summary information such as loading a tape, starting the backup, backing up files, backing up bytes, or failing to open a file. Some operations environments require more detailed information—specifically, what files are being backed up for a particular backup job.
4. On the Tools menu, click Options. In the Options window, click the Backup Type tab. In the Default Backup Type drop-down list, select Normal and click OK.
5. In Backup Destination, select one of the following:
●Choose a file to back up files and folders to a file. This is the default setting.
●Choose a tape device if you want to back up files and folders to a tape.
6. In Backup media or file name, select one of the following:
●If you are backing up files and folders to a file, enter the path and file name of the backup (.bkf) file.
●If you are backing up files and folders to tape, choose the tape you want to use.
7. Click Start Backup, click Advanced, select Data Verification, and click OK.
8. Click Schedule, enter the logon name and password that the backup job will run as, and then click OK.
9. In the Schedule Job Options window, enter the name for the backup job—for example, "ServerName-NormalBackup"—and click OK.
10. In the Schedule Job window, confirm that the Schedule tab is selected. Under Schedule Task, select Weekly and click the day of the week you want the normal job to run. In Start Time, enter the time you want the backup job to start, and click OK.
11. Enter the logon name and password that the backup job will run as and then click OK.
12. Click OK. The weekly normal job is now scheduled to run.

Dependencies

None

Technology Required

Backup.exe

Optimizing Quadrant
Availability Management SMF
Operations Role Cluster
Weekly

Process: Design for service recovery

Description

Regardless of how well designed and managed an IT service is, problems with its delivery can still occur—whether as the result of an unexpected event or even the failure of a countermeasure deployed to protect the service. A major design consideration for high availability is a reactive one, charged with recovering service as quickly and efficiently as possible. Rapid recovery may also be the appropriate design choice for a particular availability risk if an effective countermeasure proves to be too expensive for the customer to justify.

Task: Update automated system recovery (ASR) backup

Purpose

There is a potential for a system failure during the lifetime of a file server. Several startup options, such as safe mode and last known good configuration, are available to recover from system failure. However, automated system recovery (ASR) backups should be included in the regular maintenance of your file server to act as a last resort in system recovery.

ASR backs up the system files necessary for starting the file server. Other data should be included as part of the daily and weekly backup jobs for the server. ASR backups are performed using Backup in interactive mode; they cannot be scheduled.

Procedure 1: Get media for ASR backup

ASR backup requires a blank 1.44-MB disk to save system settings, and media, such as tapes or compact discs, that will contain the backup files.

1. A separate media set is recommended for ASR backups. The media set should be stored in a secure location, separate from data backup files.
2. Store the 1.44-MB disk with the ASR backup set it was created with.
You must have the disk that was created with the ASR backup set in order to perform an ASR recovery.

Procedure 2: Create ASR backup

1. Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup.
2. On the Jobs menu, click New.
3. On the Tools menu, click ASR Wizard.
4. Follow the instructions that appear on the screen.

Procedure 3: File server does not have a floppy disk drive

1. Perform an ASR backup on the computer without the floppy disk drive. ASR backup will log an error.
2. Copy the Asr.sif and Asrpnp.sif files located in the %systemroot%\Repair directory to another computer with a floppy disk drive, and then copy those files onto a disk.

Dependencies

●The file server should have a floppy disk drive. Procedure 3 provides a workaround to copy system files to a disk, but a floppy disk drive is required for ASR recovery.
●You must be a member of the Administrators or Backup Operators group to perform ASR.

Technology Required

Backup.exe

Optimizing Quadrant
Availability Management SMF
Operations Role Cluster
Quarterly

Process: Design for service recovery

Description

Regardless of how well designed and managed an IT service is, problems with its delivery can still occur—whether as the result of an unexpected event or even the failure of a countermeasure deployed to protect the service. A major design consideration for high availability is a reactive one, charged with recovering service as quickly and efficiently as possible. Rapid recovery may also be the appropriate design choice for a particular availability risk if an effective countermeasure proves to be too expensive for the customer to justify.

Task: Validate ASR recovery

Purpose

The ASR backup must be validated in order to confirm the integrity of the backup process. The operations team must also be familiar with the hardware and software involved in the ASR recovery process.

Procedure 1: Prepare for ASR recovery

1. Retrieve the latest ASR backup media set and disk from the secure location. Verify that the media and disk are from the same backup.
2. Retrieve the media set for the most recent normal backup of the server.
3. Retrieve the original Windows Server 2003 installation CD.
4. Retrieve any mass storage device driver files supplied by the manufacturer. Verify that you have these files before beginning the recovery operation.
5. Configure the recovery server hardware.

Procedure 2: Perform ASR recovery

1. Insert the original Windows Server 2003 installation CD.
2. Restart the server. If prompted to press a key to start the computer from the CD, press the requested key.
3. If you have a separate driver file as described in Procedure 1, step 4, use the driver as part of Setup by pressing F6 when prompted.
4. Press F2 when prompted at the beginning of the text-only mode section of Setup. You will be prompted to insert the ASR disk you previously created.
5. Follow the directions on the screen.
6. If you have a separate driver file as described in Procedure 1, step 4, press F6 (a second time) when prompted after the system restarts.
7. Follow the directions on the screen.

Procedure 3: Restore data files to the recovery server

1. Start the Backup utility.
2. On the Welcome tab, click Restore Wizard (Advanced).
3. Click Next.
4. Select the items to be restored from the latest normal backup set and click Next.
5. At this point, you can click Finish to start the restore or click the Advanced button for more options.
If you decide to configure Advanced options, the following is a list of items that should be selected.●Restore files to original location●Leave existing files●Restore security settings●Restore junction points but not the folders and file data they reference●Preserve existing volume mount pointsDependencies●ASR recovery requires that the recovery server have the same hardware and disk configuration as the server where the ASR backup was performed.●Perform regular ASR backup.●Manufacturer-supplied device drivers for mass storage devices.Technology RequiredBackup.exeOperating QuadrantDirectory Services Administration SMFInfrastructure Role ClusterDaily Process: Maintaining the directoryPurposeThe data contained in the directory is, or very soon will be, critical to the base operation and productivity of the organization. If the directory becomes unavailable for any reason—for example, through equipment failure or data corruption—the business will suffer from lost productivity and financial loss. Developing sound backup and restore procedures for the directory and supporting system components ensures that no critical directory data and configuration information will be lost.Task: Back up DFS namespace configurationPurposeThis task creates a backup of the DFS namespace and a restoration script to resolve issues with DFS objects.Procedure 1: Export DFS namespaceWindows Server 2003 Support Tools include the Dfsutil.exe in the can be used to export the DFS namespace configuration into a script that can be used later for restoration. The following command will export the links: Dfsutil /Root :\\dfsname\root /Export:<drive path><filename>where filename is the name of the script that will contain the DFS namespace configuration for restoration. Procedure 2: Automate DFS configuration exportThis process can be automated using Microsoft Windows Shell Scripting. The following is a simple command that can read an input file. 
Each line of the input file lists a DFS root:

@echo off
for /f %%i in (input.txt) do dfsutil.exe /view:%%i /export:<filename.txt>
exit

where filename is the name of the file that will contain the report.

Dependencies
None

Technology Required
● Dfsutil.exe
● Windows Script Host

Operating Quadrant
Storage Management SMF
Support Role Cluster
Weekly

Process: Data backup, restore, and recovery operations
Description
Storing, restoring, and recovering data are key storage management activities for maintaining company data. Data should be classified by type, and a strategy should be developed to ensure that operations fulfill business requirements and service level objectives.

Task: Verify previous day's backup job
Purpose
This task provides guidance on verifying the integrity of the daily scheduled backup. Regardless of the utility used to provide backup service to the file server, the operations team should verify each backup job after its completion. This verification allows the operations team to resolve issues concerning backups that may put the organization at risk of data loss.

Procedure 1: Verify completion of backup
You can use Event Viewer to verify whether a backup started and completed, and whether any errors were encountered during the backup operation.
1. Start Event Viewer.
2. Right-click Application Log, point to View, and select Filter.
3. In Event Source, click the drop-down menu, select Backup, and click OK.
4. Search for the following events:
● Event 8000. This event signals the start of a backup on a volume. You should receive this event for each volume in the backup job.
● Event 8001. This event signals the end of a backup on a volume. You should receive one of these events for each volume in the backup job. When a volume has been backed up successfully, Event 8001 is logged as an informational event. When errors are encountered backing up a volume, Event 8001 is logged as an error event.
● Event 8019.
This event signals the end of the backup operation. You should receive one 8019 event per backup job.

Procedure 2: Review the backup log
Backup logs can be vital for troubleshooting and for recording the status of the backup operation. The default setting in Windows Server 2003 is for backup logs to contain summary information, such as loading a tape, starting the backup, backing up files, backing up bytes, or failing to open a file. Some operations environments require more detailed information, specifically which files are being backed up for a particular backup job.

To get more detailed logging in the backup logs:
1. Start the Backup utility.
2. On the Tools menu, click Options.
3. In the Options window, click the Backup Log tab, select Detailed, and click OK.
Backup logs will now contain detailed information regarding the backup operations.

To review the backup log:
1. Start the Backup utility.
2. On the Tools menu, click Reports.
3. In the Backup Report dialog box, select the previous night's backup report, and click View.

Procedure 3: Report backup problems to incident management
Use your organization's incident management process to record the following conditions in your environment. This procedure describes some of the steps that should be followed when filling out the incident management report.
1. Event 8000 is not logged in the application log. When this occurs, the file server is at risk of data loss. Verify that the backup job has not been deleted. Review the start time for the job to verify that it has not been modified.
2. Event 8000 is not logged for all volumes on the server. When this occurs, a volume is at risk of data loss. Review the backup job's configuration to see if the volume has been removed. Also check the configuration management database (CMDB) to see if the volume has been removed from the backup job.
3. Event 8001 is logged as a warning event in the application log.
Review the backup log by searching for the "Warning:" string in the body of the log. Record the warning and the reason for it.
4. Event 8019 is not logged in the application log. This means the backup job is still running. Review the application log and record the last volume to trigger a successful 8001 informational event. Record the last volume to trigger an 8000 event.

Dependencies
● Backup jobs are logged to disk.
● Incident management process.

Technology Recommended
● Backup
● Third-party backup software

Operating Quadrant
Storage Management SMF
Infrastructure Role Cluster
Daily

Process: Storage resource management
Description
Storage resource management (SRM) is a key storage management activity that ensures that important storage devices, such as disks, are formatted and installed with appropriate file systems. In addition, SRM includes using management technologies to monitor storage resources in order to ensure that they meet availability, capacity, and performance requirements.

Task: Monitor available disk space
Purpose
This task proactively monitors disk space on a volume to control the allocation of disk space and to provide reporting for capacity planning. It mitigates problems that may result from rapid file growth on a volume. In an IT environment, it is important to set alerts on a logical volume at differing capacity levels. Some alerts are informational so that the status of the disk volume can be reported. Other alerts warn the operations team of a real capacity problem on a volume. The following are suggested thresholds on a volume:
● Sixty-five percent capacity. It is important to note that a particular volume is 65 percent or more full. This means the volume has only 35 percent or less capacity remaining for growth.
● Seventy-five percent capacity. When a volume is 75 percent full, consider creating new shares on another volume.
● Ninety percent capacity. Volumes that are at 90 percent capacity should not have file shares created on them.
Volumes that are at 90 percent should be included in the problem management report.

Once the 90 percent capacity threshold is reached and an alert is generated, an administrator should initiate appropriate changes, such as increasing available capacity, or begin to migrate the shares to higher-capacity subsystems. Additional administrative actions might include performing disk defragmentation and disk cleanup. (See Task: Review disk fragmentation.)

Procedure 1: Configure alert
1. Start System Monitor.
2. Expand the Performance Logs and Alerts node, right-click Alerts, and click New Alert Settings.
3. Type a name for the alert (for example, Disk Capacity) and click OK.
4. On the General tab, click Add, and select the following object, instance, and counter:
Object: LogicalDisk. Instance: each logical volume instance. Counter: % Free Space.
5. In the Alert when the value is drop-down box, select Under and enter the limit for your environment.
6. On the Action tab, the default selection is Log an entry in the application event log.
7. Select the Schedule tab, click Start Log At, and enter the start time for the alert.
8. Click Apply, and then click OK.
The alert is activated and will have a green status, indicating that it is logging information based on the configuration and schedule.

Procedure 2: Stop creating share alert
● Review the event log for Event 2031:

Event Type: Information
Event Source: SysmonLog
Event Category: None
Event ID: 2031
Description: Counter: \\Servername\LogicalDisk(driveletter)\% Free Space has tripped its alert threshold. The counter value of n is under the limit value of n.

When you begin to receive Event 2031, the capacity of the volume must be included in the daily problem management report. This alert can indicate when to stop creating new shares on a volume. The remaining space on the volume is used to accommodate data growth on existing shares. This alert will continue to be written to the application log until the alert is stopped.
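The capacity thresholds and operator actions described in this task can be summarized in a small decision function. The following is an illustrative sketch only; the function name and action strings are not part of any Windows tooling, and the thresholds follow the 65/75/90 percent guidance above:

```python
def capacity_actions(percent_free: float) -> list[str]:
    """Map a volume's free-space percentage to the suggested operator actions.

    Thresholds follow the guidance above: 65/75/90 percent used
    correspond to 35/25/10 percent free.
    """
    percent_used = 100.0 - percent_free
    actions = []
    if percent_used >= 65:
        actions.append("report volume status (informational)")
    if percent_used >= 75:
        actions.append("create new shares on another volume")
    if percent_used >= 90:
        actions.append("stop creating shares; include volume in daily problem management report")
        actions.append("raise RFC: extend volume or migrate shares")
    return actions

# Example: a volume with 8 percent free space (92 percent used)
print(capacity_actions(8.0))
```

In practice the input would come from the % Free Space counter value reported in the Event 2031 description.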
It is okay to stop the alert, but once a volume has triggered the Stop Creating Share alert, its capacity should always be included in the daily problem management report.

Procedure 3: Capacity alert
● Review the event log for Event 2031. (See Procedure 2: Stop creating share alert.)
Once the capacity alert is triggered, the disk has reached capacity. The alert should be set at 20-25 percent free space. Once a volume has triggered the capacity alert, submit a request for change (RFC) to move data to another volume or to extend the volume.

Dependencies
An alert must be configured to perform an action when a certain disk capacity threshold is reached.

Technology Required
Performance Logs and Alerts in Windows Server 2003

Operating Quadrant
Storage Management SMF
Infrastructure Role Cluster
Weekly

Process: Storage resource management
Description
Storage resource management (SRM) is a key process for ensuring that important storage devices, such as disks, are formatted and installed with appropriate file systems. In addition, SRM includes using management technologies to monitor storage resources to ensure that they meet availability, capacity, and performance requirements.

Task: Review disk fragmentation
Purpose
Disk fragmentation occurs when files are written to non-adjacent clusters on a disk. During normal operation of a file server, the file system becomes fragmented. Fragmentation affects read/write performance: a disk that is highly fragmented requires several passes of the disk's read and write heads to retrieve or store data.

To analyze the extent of disk fragmentation on a volume and to remediate performance issues associated with fragmented files and free space, run the Disk Defragmenter utility. Effective use of Disk Defragmenter should include first removing unnecessary files from the volume. Prior to defragmenting a volume, run the Disk Cleanup utility to perform the following activities:
● Remove temporary Internet files.
● Remove any downloaded program files (for example, Microsoft ActiveX controls and Java applets downloaded from the Internet).
● Empty the Recycle Bin.
● Remove Windows temporary files.
● Remove Windows components that you are not using.
● Remove installed programs that you no longer use.

Procedure 1: Analyze the volume
1. Start Disk Defragmenter.
2. Select the volume that you want to analyze, and then click Analyze.
3. Click Save As to save the report.

You can use Defrag.exe to schedule an analysis of disk fragmentation on a volume. To output the report to a text file, the command-line syntax is:

defrag <volume> -a -v >filename.txt

where filename is the name of the file that will contain the report.

The Disk Defragmenter window displays the estimated disk usage before defragmentation. For more detail, use the command line above to perform this task. Based on the results of the report, either run Disk Cleanup and proceed to defragment the volume, or wait for the next scheduled defragmentation. If, after several analyses of the volume, the results show no need to defragment the disk, you may want to reduce the frequency of this task to once a month.

Procedure 2: Clean up the volume
1. Start Disk Cleanup.
2. Select the volume that was analyzed in Procedure 1.
3. Select the file types to delete and click OK.

Cleanmgr.exe can be scheduled to run. Prior to scheduling Cleanmgr.exe, you must specify which tasks you want performed during the disk cleanup. You can do this by running the following command at the command line:

cleanmgr /d driveletter: /sageset:n

where driveletter is the volume that you want to clean up.
4. When you enter this command, the Disk Cleanup Settings dialog box appears.
Select the file types you want removed and click OK.

Now you can schedule the disk cleanup task you just created by running the following command from the command line or Task Scheduler:

cleanmgr /sagerun:n

Procedure 3: Defragment the volume
1. Start Disk Defragmenter.
2. Click the volume that you want to defragment, and then click the Defragment button.

You can use Defrag.exe to schedule defragmentation on a volume. It is best to defragment a volume during low-usage periods in order to reduce the effect the process has on file server performance. The command-line syntax is:

defrag <volume> -v >filename.txt

where volume is the drive you want to defragment, and filename is the name of the file that will contain the defragmentation report. For a list of switches for the Defrag command, type defrag at the command prompt.

Dependencies
● Administrator privileges are required to run Disk Defragmenter.
● Defragmentation requires 15 percent free disk space. If disk space is low, consider using the -f switch, which forces defragmentation even when free disk space is low.
● Confirm that there is a good backup of the volume prior to performing defragmentation.
● For more information on running Disk Cleanup from the command line, see the Automating Disk Cleanup Tool in Windows white paper.
● Disk Defragmenter cannot be run on a volume that has Volume Shadow Copies activated. For more information, see the Shadow Copies May Be Lost When You Defragment a Volume white paper.

Technology Required
● Disk Defragmenter
● Disk Cleanup

Operating Quadrant
Storage Management SMF
Operations Role Cluster
As Needed

Process: Data backup, restore, and recovery operations
Description
Storing, restoring, and recovering data are key storage management activities for maintaining company data.
Data should be classified by type, and a strategy should be developed to ensure that backup and recovery processes fulfill business requirements and service level objectives.

Task: Verify restore
Purpose
When restoring files and folders to the file system, it is important to verify the successful completion of the restoration task. If data is not verified as restored before users are directed to the restore location, the integrity of the backup/restore process could be questioned by users.

Procedure: Verify restore configuration tasks
1. Start the Backup utility.
2. On the Tools menu, select Reports.
3. In the Backup Reports window, select the report that contains the restore job, and click View.
4. Search the log for the "Operation: Restore" string.
5. Verify that the restore location and restored files match the location specified in the initial restore request.
6. Use Windows Explorer to navigate to the restore location and verify that the data exists.

Dependencies
Scheduled backups are being performed.

Technology Required
Backup

Optimizing Quadrant
Capacity Management SMF
Operations Role Cluster
Daily

Process: Managing resources and service performance
Description
Capacity management is concerned with optimizing the use of IT resources in order to achieve the level of service performance agreed upon with the client. These resources are supplied by support organizations to ensure that the requirements of the business are met. The capacity management process can be either reactive or proactive. Iterative activities, such as monitoring, analyzing, tuning, and reporting, are also important in the process of managing resources and service performance. Each process requires different types of data.
For example, managing IT resources involves documenting the usage levels of individual components in the infrastructure, whereas managing service performance records transaction throughput rates and response times.

Task: Capture service performance statistics
Purpose
During the normal operation of a file server, it is important to monitor the overall health of the server. This information is used to review general performance, adherence to service level agreements (SLAs), and capacity planning, and to create a baseline for the file server.

Procedure: Create performance monitor logs
1. Start System Monitor.
2. Double-click Performance Logs and Alerts, right-click Counter Logs, and select New Settings.
3. Enter the name for this log (for example, "Service Performance Statistics") and click OK.
4. On the General tab, click Add Counters and select the following counters:

Processor Performance
● Processor\% Processor Time. The percentage of elapsed time the processor spent executing instructions for processes or services. It reports the sum of the time the processors spent executing code in privileged mode and in user mode. This counter provides an overall view of the processors' activity.
● Processor\% Privileged Time. The percentage of elapsed time that the process threads spent executing code in privileged mode. The operating system switches application threads to privileged mode to allow direct access to the system's kernel.
● System\Context Switches/sec. The combined rate at which all processors on the computer are switched from one thread to another. Context switches occur when a running thread voluntarily relinquishes the processor, is preempted by a higher-priority ready thread, or switches between user mode and privileged mode to use an executive or subsystem service.
High rates of context switching can result from inefficient hardware or poorly designed device drivers or applications.

Memory Performance
● Memory\Pages/sec. The rate at which pages are read from or written to the disk in order to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays.
● Memory\Available MBytes. The amount of physical memory, in megabytes, immediately available for allocation to a process or for system use.
● Paging File\% Usage. The amount of the page file instance in use, in percent.

Network Performance
● Server\Errors System. The number of times an internal server error was detected. Unexpected errors usually indicate a problem with the server.
● Server\Work Item Shortages. Occurs when no work item is available or can be allocated to service the incoming request. A work item is the location where the server stores a server message block (SMB). Work item shortages might be caused by an overloaded server.
● Server\Blocking Requests Rejected. The number of times the server has rejected blocking SMBs due to an insufficient count of free work items. This counter indicates whether the MaxWorkItems or MinFreeWorkItems server parameters might need to be adjusted.
● Server Work Queues\Queue Length. The current length of the server work queue for this CPU. A sustained queue length greater than four might indicate processor congestion. This is an instantaneous count, not an average over time.

Disk Performance
● Physical Disk\Current Disk Queue Length. The number of requests outstanding on the disk at the time the performance data is collected. It also includes requests in service at the time of the collection. This is an instantaneous snapshot, not an average over the time interval.
Multi-spindle disk devices can have multiple requests active at one time, but other concurrent requests are awaiting service. This counter might reflect a transitory high or low queue length, but if there is a sustained load on the disk drive, it is likely to be consistently high. Requests experience delays proportional to the length of this queue minus the number of spindles on the disks. For good performance, this difference should average less than two.
● Physical Disk\Avg. Disk sec/Read. The average time, in seconds, of a data read from the disk.
● Physical Disk\Disk Read Bytes/sec. The rate at which bytes are transferred from the disk during read operations.
● Physical Disk\Disk Write Bytes/sec. The rate at which bytes are transferred to the disk during write operations.
● Physical Disk\Disk Reads/sec. The rate of read operations on the disk.
● Physical Disk\Disk Writes/sec. The rate of write operations on the disk.

5. Fifteen seconds is the default sampling interval. You can modify this number. Increasing the interval reduces the size of the log file, but at the risk of losing data. Decreasing the interval increases the size of the log file and provides a more detailed view of performance.
6. On the Log Files tab, click the Log File Type drop-down box and select the output format. Choose the CSV file type if you want to manipulate the data in Excel. The data can also be written to a SQL Server database.
7. Make sure the End File Names checkbox is checked. Use the year, month, and day format yyyymmdd.
8. On the Schedule tab, click the Start Log At checkbox, and enter the start time for logging.
9. Click the Stop Log At checkbox and enter the time at which logging should stop.
10. Click Apply, and then click OK. The log files will be created in <system drive>\Perflogs by default.
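Once a day's worth of samples has been written in CSV format (Step 6 above), summary statistics for baselining can be computed with a short script. The following is a sketch only: System Monitor writes one column per counter path, so the file name and column header passed in would be specific to your environment.

```python
import csv
import statistics

def summarize_counter(csv_path: str, column: str) -> dict:
    """Compute min/avg/max for one counter column of a CSV performance log."""
    values = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cell = (row.get(column) or "").strip()
            if cell:  # skip samples that were not recorded
                values.append(float(cell))
    return {
        "min": min(values),
        "avg": statistics.mean(values),
        "max": max(values),
    }
```

For example, summarize_counter("ServicePerf_20030901.csv", r"\\SERVER01\Processor(_Total)\% Processor Time") would baseline overall CPU use for that day; both the file name and server name here are hypothetical.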
The log is activated and will have a green status, indicating that it is logging information based on the configuration and schedule.

Dependencies
● You must be a member of the Administrators group or the Performance Log Users group on the local computer, or have been delegated the appropriate authority.
● There must be adequate space on the disk where the log files are being created.

Technology Required
System Monitor

Task: Capture service usage statistics
Purpose
During the normal operation of a file server, it is important to monitor service usage. Service usage differs from service performance by focusing on how users consume file server resources. This information is used to review general performance, adherence to SLAs, and capacity planning, and to create a baseline for the file server.

Procedure: Create performance monitor logs
1. Start System Monitor.
2. Double-click Performance Logs and Alerts, right-click Counter Logs, and select New Settings.
3. Enter the name for this log (for example, "Service Usage Statistics") and click OK.
4. On the General tab, click Add Counters and select the following counters:

Logical Disk Performance
● Logical Disk\% Free Space. The percentage of total usable space that is free on the selected logical disk drive.
● Logical Disk\Current Disk Queue Length. The number of requests outstanding on the disk at the time the performance data is collected. It also includes requests in service at the time of the collection. This is an instantaneous snapshot, not an average over the time interval. Multi-spindle disk devices can have multiple requests active at one time, but other concurrent requests are awaiting service. This counter might reflect a transitory high or low queue length, but if there is a sustained load on the disk drive, it is likely to be consistently high. Requests experience delays proportional to the length of this queue minus the number of spindles on the disks.
For good performance, this difference should average less than two.
● Logical Disk\Avg. Disk sec/Read. The average time, in seconds, of a data read from the disk.
● Logical Disk\Avg. Disk sec/Write. The average time, in seconds, of a data write to the disk.
● Logical Disk\Disk Read Bytes/sec. The rate at which bytes are transferred from the disk during read operations.
● Logical Disk\Disk Write Bytes/sec. The rate at which bytes are transferred to the disk during write operations.
● Logical Disk\Disk Reads/sec. The rate of read operations on the disk.
● Logical Disk\Disk Writes/sec. The rate of write operations on the disk.

Network Performance
● Server\Server Sessions. The number of sessions currently active on the server. This value indicates current server activity.
● Server\Bytes Total/sec. The number of bytes the server has sent to and received from the network. This value provides an overall indication of how busy the server is.
● Server\Files Open. The number of files currently open on the server. This value indicates current server activity.

Volume Shadow Copy Service Performance
● Shadow Copies\% Disk Used by Diff Area File. The size of all diff area files on the input volume divided by the size of the volume.
● Shadow Copies\Allocated Space (MB). The space allocated in the shadow storage volume for all the shadow copies of the input volume.
● Shadow Copies\Nb of Shadow Copies. The number of shadow copies of a volume.
● Shadow Copies\Used Space. The space used in the shadow storage volume for all the shadow copies of the input volume.
● Shadow Copies\Nb of Diff Area Files. The total number of diff area files on a volume. This value is the same as the total number of shadow copies on the system whose shadow storage is configured on the input volume.
● Shadow Copies\Size of Diff Area Files. The size of all diff area files on the input volume.

5. Fifteen seconds is the default sampling interval. You can modify this number.
Increasing the interval reduces the size of the log file, but at the risk of losing data. Decreasing the interval increases the size of the log file and provides a more detailed view of performance.
6. On the Log Files tab, click the Log File Type drop-down box and select the output format. Choose the CSV file type if you want to manipulate the data in Excel. The data can also be written to a SQL Server database.
7. Make sure the End File Names checkbox is checked. Use the year, month, and day format yyyymmdd.
8. On the Schedule tab, click the Start Log At checkbox, and enter the start time for logging.
9. Click the Stop Log At checkbox and enter the time at which logging should stop.
10. Click Apply, and then click OK. The log files will be created in <system drive>\Perflogs by default. The log is activated and will have a green status, indicating that it is logging information based on the configuration and schedule.

Dependencies
● You must be a member of the Administrators group or the Performance Log Users group on the local computer, or have been delegated the appropriate authority.
● There must be adequate space on the disk where the log files are being created.
● Run Volperf.exe from the Windows Server 2003 Resource Kit to enable Volume Shadow Copy service performance counters.

Technology Required
● System Monitor
● Volperf.exe

Operating Quadrant
Service Monitoring and Control SMF
Infrastructure and Operations Role Clusters
Daily

Process: Perform monitoring
Description
Monitoring is concerned with the real-time recording of critical data values on an ongoing basis.
The aim of recording critical data values is to ensure that adequate management information is available so that a service or services can be maintained at agreed-on levels of service performance or, at a minimum, recovered quickly.

Task: Review quota levels
Purpose
Monitor quota levels on volumes and notify users when they have exceeded assigned warning levels and quota limits.

Procedure 1: View quota events in event log
1. Start Event Viewer.
2. Right-click System, point to View, and click Filter. Use the following filter configuration:
Event Source: NTFS
From: Events On
To: Events On
Note: Use a consistent 24-hour period for reporting quota information.
3. Sort the list by event. When a user exceeds his or her quota limit, Event 37 is logged. The user field identifies the user who has exceeded the quota threshold. The description field identifies the volume where the threshold was assigned.
4. Note the user and volume where the quota threshold was exceeded. You can record this information in a program such as Microsoft Excel.

Procedure 2: Viewing quotas
The user interface (UI) can be used to view quota entries on a volume.
1. In Windows Explorer, click My Computer, right-click a volume, and then click Properties.
2. In the Properties dialog box, click the Quota tab.
3. On the Quota tab, click Quota Entries.
4. Click the Status column to sort by status.
5. For each user who has exceeded a quota threshold, record the following information in the spreadsheet created in Procedure 1, Step 4:
● Amount Used
● Quota Limit
● Warning Level
● Percent Used
● Server and volume where the quota has been assigned

Procedure 3: Notify users that quota thresholds have been exceeded
For each user identified in Procedures 1 and 2, send an e-mail message notifying the user that he or she has exceeded the warning level or quota limit. Based on your operations environment, recommend steps to rectify the quota.
Some options for resolving this situation are:
● Delete non-business-essential data from the volume.
● Request an increase in the quota limit.

Dependencies
● Quotas enabled on the volume.
● Logging events to the event log has been selected for exceeded warning levels and quota limits.

Technology Suggested
Microsoft Excel

Changing Quadrant
Configuration Management SMF
Infrastructure Role Cluster
Monthly

Process: Reviewing configuration items
Description
Because the accuracy of the information stored in the configuration management database (CMDB) is crucial to the success of Change Management, Incident Management, and other SMFs, a review process should be established to ensure that the database accurately reflects the production IT environment.

Task: Compliance check - verify that shares are created in the proper location
Purpose
This task ensures that shares created on file servers comply with organization standards for the location of file shares.

Procedure 1: Create server share report
1. Create a custom MMC and add the Shared Folders snap-in.
2. On the Action menu, choose Export List to copy this information to a text file.
3. To automate the procedure, the following sample script can be used to create a report of the folders shared on a file server and their paths:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colShares = objWMIService.ExecQuery("Select * from Win32_Share")
For Each objShare In colShares
    Wscript.Echo "AllowMaximum: " & vbTab & objShare.AllowMaximum
    Wscript.Echo "Caption: " & vbTab & objShare.Caption
    Wscript.Echo "MaximumAllowed: " & vbTab & objShare.MaximumAllowed
    Wscript.Echo "Name: " & vbTab & objShare.Name
    Wscript.Echo "Path: " & vbTab & objShare.Path
    Wscript.Echo "Type: " & vbTab & objShare.Type
Next

Procedure 2: Verify shared folders against the shared folder location standard
1. Compare the path of each shared folder to the organization's standard for shared folder creation in the CMDB.
2. If there are any discrepancies between the shared folders report and the CMDB, submit an emergency RFC to the CAB/EC.

Dependencies
CMDB

Technology Required
Windows Management Instrumentation (WMI)

Supporting Quadrant
Problem Management SMF
Operations Role Cluster
Daily

Process: Problem recording and classification
Description
This process deals with the recording and classification of a problem, which can originate from a variety of sources and media. Problems may be reported through the incident management process or as a result of analysis of the data collected by the problem management team. Additionally, other SMF teams, such as availability management and capacity management, might detect problems and pass this information to the problem management team. It is important that all problems be linked to existing incidents and that each problem be recorded in order to prioritize its resolution. Once a problem has been recorded, it is assessed against the business impact of the problem and the urgency of the required solution. This assessment determines the problem classification.

Task: Review daily problem management report
Purpose
Ensure that the appropriate resources and priority levels have been assigned to current problems. The report should include the status of any problems from the previous day or any that occurred overnight.

Procedure 1: Check file server status
The status of each file server should be included in this report.
This information could include:
● Whether the file server is online.
● Whether there are any approved RFCs pending.
● Status of the last backup.
● Confirmation that the performance monitor log from the previous day exists for the server.
● Current capacity of the storage disks.
● Status of DFS links.

Procedure 2: Review problems transferred from incident management
Review each incident that has been transferred to the problem management team.

Procedure 3: Prioritize and assign
Based on the information received on the status of the file server and any problems transferred from incident management, set the priority of each problem and assign the appropriate team members.

Dependencies
File server monitoring

Technology Required
None

Supporting Quadrant
Incident Management SMF
Supporting Role Cluster
Weekly

Process: Investigation and diagnosis
Description
This process deals with investigating an incident and gathering diagnostic data. Its aim is to identify how the incident can be resolved as quickly as possible. The process allows for management escalation or functional escalation if either becomes necessary in order to meet SLA targets.

Task: Create weekly service activity report
Purpose
This task provides a high-level report on service requests: when each was opened, when it was closed, and how long it took to resolve. The organization may have an SLA on the time it takes a customer to receive a response from the incident management team once an incident has been reported. Managers and leads can use such data to better balance the workload of the incident management team.

Management can also use the service activity report to measure the effectiveness and efficiency of the incident management staff itself. This information is important to the members of the incident management team because it shows how long cases have been open. This helps to determine which cases must be addressed next.
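Metrics like these can be derived from case records with a short script. The following is an illustrative sketch; the record fields (id, opened, closed, first_contact) are assumptions standing in for whatever the organization's incident tracking solution actually stores:

```python
from datetime import date

def activity_report(cases: list[dict], as_of: date) -> dict:
    """Summarize weekly incident activity from a list of case records.

    Each case is a dict with 'id', 'opened' (date), 'closed' (date or None),
    and 'first_contact' (bool); the field names are illustrative.
    """
    closed = [c for c in cases if c["closed"] is not None]
    return {
        "opened": len(cases),
        "closed": len(closed),
        "closed_first_contact": sum(1 for c in closed if c["first_contact"]),
        # age, in days, of cases that remain open as of the report date
        "days_open": {
            c["id"]: (as_of - c["opened"]).days
            for c in cases
            if c["closed"] is None
        },
    }
```

The per-member breakdowns described below would be produced by running the same summary over each team member's subset of cases.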
The following is an example of some of the information that can be included in the activity report:●Total number of cases opened.●Total number of cases closed.●Number of cases closed on first contact with the incident management team.●Number of days a case has been open.Procedure: Create report metricsThe method used to collect the data is dependent on the features of an organization’s incident tracking solution. However the organization collects the data, the report should include the following information: ●Total number of cases opened. This metric is collected for individual members of the team as well as for the whole team. It highlights the volume of incidents being opened regarding file services. When this metric is compared to metrics showing how many cases have been closed, how long it took to close them, and how many of them were closed on first contact, it helps the team to assess its overall effectiveness.●Total number of cases closed. This metric is collected for individual members of the team as well as for the whole team. It highlights the volume of incidents being closed regarding file services. This metric is critical when evaluating the progress of the incident management team. Open cases that must be carried over to another week require additional incident management from the case owner.●Number of cases closed on first contact with the incident management team. This metric is collected as the total for the team. It enables the incident management team to determine the effectiveness and efficiency of the incident management process and can directly impact customer satisfaction. When cases are closed on the first call, it reduces the number of cases incident management team members have to manage.●Number of days a case has been open. This metric is collected for individual members of the team as well as for the whole team. Cases that remain open for extended periods have a negative impact on customer satisfaction. 
The incident manager can use this metric to identify possible areas in which the incident management team may require training or education. In addition, cases that remain open for long periods may be better handled by the problem management team. It is important to identify these types of cases and to provide the incident owner with additional resources or to escalate the issue to the problem management team.Dependencies●Incident ticketing system.●Incidents are responded to on a daily basis.●An SLA on how an incident is handled and when an incident is escalated to the problem management team.Technology Required●Third-party tools are available that provide incident management ticketing functionality.●Reports can be built from an Access or SQL Server database.Supporting QuadrantIncident Management SMFOperations Role ClusterMonthly Process: Incident closureDescriptionThis process ensures that the customer is satisfied that the incident has been resolved prior to closing the incident record.Incident closure also checks that the incident record is fully updated and assigns a closure category.Task: Roll up activity report into monthly metricPurposeThis task provides metrics to assist in planning staffing levels and checking the incident management function against other SMFs. These reports can be used by those involved with other SMFs (such as Service Level Management, Financial Management, and Workforce Management) as well as by members of the six MOF Team Model role clusters.Procedure: Create monthly metricThis task should produce a report showing the cost of incident management and indicating where resources should be allocated to optimize its performance.●Percent closed incidents. This metric is created by taking the total number of cases closed for a month and dividing it by the number of cases opened for the month.●Percent incidents closed on first contact. 
This metric is created by taking the total number of cases closed on first contact and dividing it by the number of cases opened for the month.●Mean time to resolution. This metric measures the effectiveness of the incident management process. It is calculated by taking the total time spent on incident resolution and dividing it by the total number of cases closed. For example, 540 hours spent resolving 180 closed cases gives a mean time to resolution of 3 hours. SLAs can be compared to this metric. The numbers reported should then be used to evaluate the incident management process and to determine how effectiveness and efficiency can be improved.Dependencies●Respond to daily service request.●Weekly service request activity report.Technology RequiredNoneOptimizing QuadrantCapacity Management SMFInfrastructure Role ClusterWeekly Process: Managing resources and service performanceDescriptionCapacity management is concerned with optimizing the use of IT resources in order to achieve the level of service performance agreed upon with the client. These resources are supplied by support organizations to ensure that the requirements of the business are met. The capacity management process can be either reactive or proactive. Iterative activities, such as monitoring, analyzing, tuning, and reporting, are also important in the process of managing resources and service performance. Each requires different types of data. For example, managing IT resources involves documenting the usage levels of individual components in the infrastructure, whereas managing service performance records transaction throughput rates and response times.Task: Capture size of DFS namespacePurposeThis task captures the size of the DFS namespace for an organization and reports to the problem management team those DFS roots that are approaching namespace size limits.Procedure 1: Create DFS namespace reportWindows Server 2003 Support Tools include Dfsutil.exe, which can be used to capture the size of the Windows Server 2003 DFS namespace and mixed mode domain DFS namespace. 
The following command will export the links: Dfsutil /Root:\\dfsname\root /Export:<drive path><filename>, where drive path is the path to the folder where the report will be stored, and filename is the name of the file that will contain the report.Procedure 2: Automate DFS namespace reportThis capture process can be automated using Windows shell scripting. The following is a simple command that reads an input file. Each line of the input file lists a domain DFS root:for /f %%i in (input.txt) do dfsutil.exe /view:%%i >><filename.txt>where filename is the name of the file that will contain the report. (The >> operator appends, so the output for each root is kept in a single file.)Procedure 3: Review size of DFS namespace●Compare the DFS namespace size with the following limits on DFS namespace:●The maximum size of a single domain DFS namespace is 5 MB of metadata.●A stand-alone DFS can have as many as 50,000 links. DependenciesWindows Server 2003 Support Tools installed on the server running Dfsutil.Technology Required●Dfsutil.exe●Windows Script HostOptimizing QuadrantCapacity Management SMFOperations and Infrastructure Role ClustersMonthly Process: Managing resources and service performanceDescriptionCapacity management is concerned with optimizing the use of IT resources in order to achieve the level of service performance agreed upon with the client. These resources are supplied by support organizations to ensure that the requirements of the business are met. The capacity management process can be either reactive or proactive. Iterative activities, such as monitoring, analyzing, tuning, and reporting, are also important in the process of managing resources and service performance. Each requires different types of data. For example, managing IT resources involves documenting the usage levels of individual components in the infrastructure, whereas managing service performance records transaction throughput rates and response times.Task: Create quota reportPurposeManaging disk space consumption is vital to providing file services to customers. 
The disk quota support feature in Windows Server 2003 provides a way to manage quotas on a volume. Once quotas have been set on a volume, monitoring and reporting quota usage should be part of the regular server maintenance schedule.Procedure 1: Creating quota reports using FsutilWindows Server 2003 comes with command-line utilities that can be used to gather information on the file system. Fsutil.exe can be used to query a volume for quota entries. To obtain quota information for volumes on a server, Fsutil must be run for each volume on a file server.From a command prompt, type:fsutil quota query <volume path> >filename.txtwhere volume path is the path to the volume you want to query and filename is the name of the file that will contain the report.The report will contain quota information regarding the volume path entered. From the report you can determine:●If quotas are being tracked on a volume.●If logging for quota events is enabled on a specific user's volume (SID Name).●How much of a quota was used.●Quota threshold.●Quota limit. Procedure 2: Create a quota report using Windows Script HostThe following sample script uses Windows Script Host to create a disk quota usage report. The script is saved as a .vbs file and run from the command line. The output may be redirected to a CSV file for reporting. 
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colQuotas = objWMIService.ExecQuery _
    ("Select * from Win32_DiskQuota")
' Write the column headings once, before the loop.
Wscript.Echo "Volume,User,Disk Space Used,Limit,Status,Warning Limit"
For Each objQuota in colQuotas
    Wscript.Echo objQuota.QuotaVolume & "," & objQuota.User & "," & _
        objQuota.DiskSpaceUsed & "," & objQuota.Limit & "," & _
        objQuota.Status & "," & objQuota.WarningLimit
Next
When you view the quota status, there are three levels that can be reported: 0 (below quota warning level), 1 (above quota warning level), 2 (above quota limit).Dependencies●Quotas set on a volume.●The option to log an event when a user exceeds his or her quota limit is checked.●The option to log an event when a user exceeds his or her warning level is checked. Technology Required●Fsutil.exe●Windows Script Host Task: Create a service performance and usage reportPurposeThis task converts the performance service data into a report that can be used to support decision making.Procedure 1: Calculate daily statistics 1.Import performance logs into Excel.2.Calculate the daily average for each counter collected in the log.3.In a new worksheet, record the daily average of the counters for each day of the month.4.Use Excel’s graphing feature to create visuals that illustrate trends in performance.For clarity, it may be easier to calculate the daily statistics on a per-performance object basis. 
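The daily-average roll-up in Procedure 1 can also be scripted instead of being calculated by hand in Excel. The following is a minimal sketch in Python; the log layout (one row per sample with a Date column plus one numeric column per counter) and the counter names shown are illustrative assumptions, not the exact Performance Monitor export schema:

```python
from collections import defaultdict

def daily_averages(rows):
    """Average each counter column per day.

    `rows` is an iterable of dicts such as csv.DictReader would produce
    from an exported performance log, with a 'Date' column plus one
    numeric column per counter.
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for row in rows:
        day = row["Date"]
        counts[day] += 1
        for counter, value in row.items():
            if counter != "Date":
                sums[day][counter] += float(value)
    return {
        day: {c: total / counts[day] for c, total in counters.items()}
        for day, counters in sums.items()
    }

# Example rows as they might come from a performance log export
# (counter names are hypothetical).
sample = [
    {"Date": "2003-09-01", "% Disk Time": "10", "Avg. Disk Queue Length": "2"},
    {"Date": "2003-09-01", "% Disk Time": "30", "Avg. Disk Queue Length": "4"},
    {"Date": "2003-09-02", "% Disk Time": "20", "Avg. Disk Queue Length": "2"},
]
```

The resulting per-day averages can then be pasted or imported into the monthly worksheet described in step 3.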
You should also consider that these reports will be used in evaluating SLAs, OLAs, and UCs.Procedure 2: Store data and reports1.Store each month’s data in a single workbook for future reference.2.Save the workbook to a file share on a file server that is under regular backup maintenance.DependenciesPerformance logs documenting the serviceTechnology RequiredMicrosoft Excel or third-party spreadsheet applicationTask: Create a system load and utility reportPurposeConvert the file server resources usage data into a report that can be used to support decision making and resource allocation.Procedure 1: Calculate daily statistics 1.Import service usage performance logs into Excel.2.Calculate the daily average for each counter collected in the log.3.In a new worksheet, record the daily average of the counters for each day of the month.4.Use Excel’s graphing feature to create visuals that illustrate trends in performance. For clarity it may be easier to calculate the daily statistics on a per-performance object basis. You should create the following reports:●Logical disk usage report●Network usage report●Volume Shadow Copy service usageProcedure 2: Store data and reports1.Store each month’s data in a single workbook for future reference.2.Save the workbook to a file share on a file server that is under regular backup maintenance. DependenciesPerformance logs documented on file serversTechnology RequiredMicrosoft Excel or third-party spreadsheet applicationOperating QuadrantDirectory Services Administration SMFOperations Role ClusterDaily Process: Managing the directoryDescription Managing directory services involves the day-to-day process of providing for the safety, security, and functional operation of software and hardware components. 
Ensuring the safety and security of the hardware and software components is a critical issue and is covered in great detail in the MOF security document.Task: Check status of DFSPurposeAn implementation of Distributed File System (DFS) in an organization can contain multiple DFS roots, links, and target servers. It is important to understand the status of the DFS environment and report the status on a daily basis.Procedure 1: View DFS status1.Start the DFS management console.2.Check the status for each DFS root and link. DFS roots and links can have the following status:●A blue check mark on the root or link indicates that the root or link can be reached, as well as all of the targets.●A yellow exclamation point on the root or link indicates that the root or link can be reached but that not all of the targets may be reachable, either because DFS referral is disabled on the target or there is some other problem preventing access to the target.●A red cross on the root or link indicates that the root or link cannot be reached.3.Right-click the target, and click Status.When you check the status of the target server, it will be either online or offline. Offline indicates that the target cannot be reached.Procedure 2: Check status of serverThese steps should be followed when the status of a DFS root server or a target server is reported as offline.1.Run the PING utility from the command line, attempt a connection using Terminal Services, or mount an administrative share such as C$ to verify that the offline server can receive network communication. 
If this test fails, escalate the issue to the problem management team to get the server online.2.Run Srvinfo.exe using the following command to make sure all the proper services are running: srvinfo \\<servername>, where servername is the server reported as offline.3.Log on to the server locally or over the network to verify that you can log on.Procedure 3: Check DFS replication status for DFS roots having multiple targets (domain DFS)If you have configured domain DFS roots and have configured links with replicas, follow these steps to ensure that replication is enabled on the target replicas, since File Replication Services (FRS) handles replication of domain DFS. FRS can cause high replication traffic and must be carefully scheduled to occur only during times of low network utilization.1.Start the DFS management console. 2.Highlight the links that have been enabled for replication. A blue circular icon will be over links that have been enabled for replication.3.Click Action and choose Show Replication Information. This will add a column in the right pane called File Replication. This will inform you if replication is enabled for this link.4.To check the replication schedule, highlight the link, and click Properties. Dependencies●Domain DFS roots configured in the enterprise.●Links with multiple targets have been configured to replicate. Technology Required●DFS MMC●Srvinfo.exeSupporting QuadrantIncident Management SMFSupport Role ClusterDailyProcess: Investigation and diagnosisDescriptionThis process deals with investigating incidents and gathering diagnostic data. Its aim is to identify how an incident can be resolved as quickly as possible.The process allows for management escalation or functional escalation if either becomes necessary in order to meet SLA targets.Task: Respond to daily service requestPurposeMake sure all incidents are answered and there is an incident owner responsible for the incident life cycle. 
This serves the organization in two ways:●Customers understand that when an incident is reported, they will receive confirmation that someone from the incident management team has reviewed the request. This ensures that customers will continue to use the incident support channel set up in the organization.●Each incident has an owner responsible for collecting background information and doing preliminary troubleshooting. The owner is responsible for contacting other technical specialists to assist the customer in resolving the incident, documenting the incident, and making sure contributing technicians add their comments to the incident request. This ensures that there is a single point of contact for the incident from both the customer's and the organization's perspective.Procedure 1: Acknowledge receipt of service request1.Send the customer e-mail confirming receipt of an incident request.2.Give the customer an incident case number prior to collecting data and troubleshooting the incident.Procedure 2: Document incident●Document the problem, the system affected, actions taken to troubleshoot the problem, and the plan to resolve the incident. The following are systems that can be affected in a file server environment:●File server●Share●Permissions●DFS●File Replication Services (FRS)●Volume Shadow Copy service ●Disk capacityProcedure 3: Update customer on status of incident●Send the customer e-mail, confirming the problem, system affected, actions taken to troubleshoot the problem, and the current plan to resolve the incident. 
If another technician is involved in troubleshooting, make sure the technician's notes are included as part of the case documentation.Procedure 4: Close incident●If the incident is not resolved following the customer’s initial request for incident management, follow up with the customer and other technicians until the incident is resolved.Dependencies●Incident ticketing system.●An SLA on the means that customers can use to request incident management—for example, through e-mail or a service phone number. Technology Required●There are third-party tools that provide incident management ticketing functionality.●A Microsoft Access or SQL Server database can also be used to create incident tickets.Changing QuadrantChange Management SMFInfrastructure Role ClusterDailyProcess: Change classification and authorizationDescriptionAfter an RFC has passed the initial screening, the change manager must classify and authorize the RFC. The category assigned to the RFC is a reflection of the impact the change is likely to have on the IT environment. The priority level set for an RFC is a reflection of its urgency, and it determines how quickly the change advisory board (CAB) will review it.There are four change categories: minor, standard, significant, and major. There are also four categories of priority: low, medium, high, and emergency. Once an RFC has been classified, it must be authorized. The process of authorizing a change request depends on the category and priority of the change:●Emergency priority changes are escalated to the CAB/EC for fast-track approval.●Standard changes are approved automatically and progress directly to the change development and release phases.●Minor changes can be approved by the change manager without reference to the CAB.●All other changes must be approved by the CAB.The two tasks that follow—attending a CAB meeting and reviewing an emergency change request—are among several tasks that would be associated with classification and authorization. 
Attending a CAB meeting is singled out because it is common to much of the change process. Reviewing an emergency change request is singled out because emergency changes typically involve high risk and require a great outlay of time and resources. More information about the other tasks, and about the change management process in general, is available at . Task: Attend CAB meetingPurposeThe CAB meets to review significant and major changes to the operations environment. From a file service perspective, change requests involving disk capacity, replication, and registry modifications, as well as updating antivirus software or adding a new file server to the environment, can be evaluated at this weekly meeting.It's important for a representative of the Infrastructure Role Cluster to attend the meeting in order to participate in the change management process. Participating in the process could include providing additional data regarding a particular file service RFC that members of the CAB may not have available to them. Additionally, it is important to be informed about other RFCs that may have an indirect effect on the delivery of file services and to consider these effects when approving an RFC for change development.Procedure: Attend change review board meeting1.Regularly attend the CAB meeting.2.Consider the effect that any RFC may have on file service configuration items:●File server hardware●Domain controller hardware●Hardware vendor●Server role (file server or domain controller?)●Windows Server 2003 software●Service packs●Hotfixes●Antivirus software●Monitoring software●Backup software●Processes and procedures●Documentation Dependencies●A process must be established to initiate a change request in the operations environment.●An identified CAB. Technology RequiredOperations team educated about MOF/ITIL.Task: Review emergency change requestPurposeProvide guidance to the change advisory board emergency committee (CAB/EC) on processing an emergency RFC. 
The number of emergency change requests should be kept to a minimum because they typically involve high risk and require a great outlay of time and resources. Emergency changes to file services can have a great impact on a large number of users and they can affect business processes that depend on file services. For this reason it is very important to create a change request process that emphasizes prioritizing and attending to urgent problems associated with file services. The Infrastructure Role Cluster is responsible for this task, but the request for emergency change can be initiated by any of the six Team Model role clusters. An emergency change request could involve the release of updates to the operating system, third-party applications, or configuration changes.Procedure: Contact CAB/EC1.Confirm that the server has a recent successful backup before contacting the CAB/EC members.2.Select CAB/EC members. This should include standing members of the CAB as well as those members who can give the greatest guidance to file services.3.Notify the CAB/EC of the emergency change request. Each member of the CAB/EC identified in Step 2 must be notified of the emergency change request. It is important that every attempt be made to contact each member of the CAB/EC either by e-mail, mobile device, or other communication methods. The member should be given an expected time in which to respond to the emergency change request and general information about the change request.4.Review the RFC. Collect all information pertaining to changes to file services, including asking for additional information from the change initiator. The CAB/EC should consider the impact that the change has on file services and weigh the risk associated with making an emergency change to the file system versus making a standard change. 
The types of changes that could be made include:●Applying service packs.●Adding a new file server.●Adding a new DFS root.●Adding new partitions.●Adding new disks.●Adjusting quota settings above policy.●Modifying Volume Shadow Copy service schedule.●Changing backup and restore procedures.●Modifying and applying policies.●Changing other existing settings.●Changing a process or script used to administer servers. Along with change type, collect the configuration item that will be affected by the change. Configuration items are objects that are subject to change. Any item that has the possibility of changing falls under change management. For file servers, these items include:●File server hardware●Domain controller hardware●Hardware vendor●Server role (file server or domain controller?)●Windows Server 2003 software●Service packs●Hotfixes●Antivirus software●Monitoring software●Backup software●Processes and procedures●Documentation●RFCs Dependencies●A process must be established to initiate a change request in the operational environment.●An identified CAB/EC roster and individuals who are contacted for emergency changes as they relate to file services. 
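The notification steps in the procedure above (contact every CAB/EC member and give each an expected response time) can be tracked with a simple structure. The sketch below is illustrative only; the field names and the two-hour response window are assumptions, not part of any MOF tooling, and the deadline should come from your own emergency-change SLA:

```python
from datetime import datetime, timedelta

def notify_cab_ec(members, sent_at, response_window_hours=2):
    """Build a contact record per CAB/EC member with a response deadline.

    The two-hour default window is a placeholder; substitute the deadline
    defined by the organization's emergency-change SLA.
    """
    deadline = sent_at + timedelta(hours=response_window_hours)
    return {m: {"notified": sent_at, "deadline": deadline, "responded": None}
            for m in members}

def overdue_members(records, now):
    """Members who have not responded and whose deadline has passed."""
    return sorted(m for m, r in records.items()
                  if r["responded"] is None and now > r["deadline"])
```

A record like this makes it straightforward to show, at any point during the emergency change, who still needs to be chased before the CAB/EC can rule on the RFC.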
Technology RequiredOperations team educated about MOF/ITIL.Changing QuadrantConfiguration Management SMFInfrastructure Role ClusterWeekly Process: Reviewing configuration itemsDescriptionBecause the accuracy of the information stored in the configuration management database (CMDB) is crucial to the success of Change Management, Incident Management, and other SMFs, a review process should be set up to ensure that the database accurately reflects the production IT environment.Task: Capture configuration snapshotPurposeThis task configures the file server to provide a point-in-time view of the file server.Procedure 1: Run Srvinfo for all file serversTo get system information, shared folders, disk capacity, services currently running, network protocols, and system uptime, use Srvinfo.exe.At the command line run:srvinfo -s \\<servername> ><drive path>\<servername>-<date>.txtSrvinfo is scriptable and can be easily automated to facilitate batch processing for large environments.Procedure 2: Export local security policies1.Start the Local Security Policy MMC and expand the Local Policies node.2.Highlight Audit Policy, then right-click and select Export List. Save the export to a secure file share.3.Highlight User Rights Assignment, then right-click and select Export List. Save the export to a secure file share.4.Highlight Security Options, then right-click and select Export List. Save the export to a secure file share.Procedure 3: Create report of installed softwareThe following sample script can be used to create a report of the software installed on the file server using Windows Installer. SMS is a good alternative to enumerate installed applications, Windows components, and patches. 
This is especially useful in large environments.
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.CreateTextFile("filename", True)
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colSoftware = objWMIService.ExecQuery _
    ("Select * from Win32_Product")
objTextFile.WriteLine "Caption" & vbtab & _
    "Description" & vbtab & "Identifying Number" & vbtab & _
    "Install Date" & vbtab & "Install Location" & vbtab & _
    "Install State" & vbtab & "Name" & vbtab & _
    "Package Cache" & vbtab & "SKU Number" & vbtab & "Vendor" & vbtab _
    & "Version"
For Each objSoftware in colSoftware
    objTextFile.WriteLine objSoftware.Caption & vbtab & _
        objSoftware.Description & vbtab & _
        objSoftware.IdentifyingNumber & vbtab & _
        objSoftware.InstallDate2 & vbtab & _
        objSoftware.InstallLocation & vbtab & _
        objSoftware.InstallState & vbtab & _
        objSoftware.Name & vbtab & _
        objSoftware.PackageCache & vbtab & _
        objSoftware.SKUNumber & vbtab & _
        objSoftware.Vendor & vbtab & _
        objSoftware.Version
Next
objTextFile.Close
This sample script may produce an error if there are no MSI installed applications.Procedure 4: Create report of installed hotfixesThe following sample script can be used to create a report of the hotfixes installed on the file server:
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colQuickFixes = objWMIService.ExecQuery _
    ("Select * from Win32_QuickFixEngineering")
For Each objQuickFix in colQuickFixes
    Wscript.Echo "Computer: " & objQuickFix.CSName
    Wscript.Echo "Description: " & objQuickFix.Description
    Wscript.Echo "Hot Fix ID: " & objQuickFix.HotFixID
    Wscript.Echo "Installation Date: " & objQuickFix.InstallDate
    Wscript.Echo "Installed By: " & objQuickFix.InstalledBy
Next
DependenciesMSI installed softwareTechnology Required●Srvinfo.exe●Local security policies●Windows 
Management Instrumentation (WMI)Processes by MOF Role ClustersThis chapter is designed for those who want to see all processes for a single role cluster in one place. The information is the same as that in the previous two chapters. The only difference is that the processes are ordered by MOF role cluster.Operations Role ClusterDaily ProcessesProcess 1: Problem recording and classificationTask: Review daily problem management reportProcess 2: Managing resources and service performanceTask 1: Capture service performance statisticsTask 2: Capture service usage statisticsWeekly ProcessesProcess 1: Incident closureTask: Roll up activity report into monthly metricProcess 2: Design for service recoveryTask: Update automated system recovery (ASR) backupMonthly ProcessesProcess: Managing resources and service performanceTask 1: Create quota reportTask 2: Create a quota report using Windows Script HostTask 3: Create a service performance and usage reportQuarterly ProcessesProcess: Data backup, restore, and recovery operationsTask: Verify restoreAs-Needed ProcessesProcess: Data backup, restore, and recovery operationsTask: Verify restoreSupport Role ClusterDaily ProcessesProcess 1: Data backup, restore, and recovery operationsTask: Run daily incremental backupProcess 2: Investigation and diagnosisTask: Respond to daily service requestWeekly ProcessesProcess 1: Investigation and diagnosisTask: Create weekly service activity reportProcess 2: Data backup, restore, and recovery operationsTask: Run weekly normal backupMonthly ProcessesThere are no monthly processes for this role cluster.As-Needed ProcessesThere are no as-needed processes for this role cluster.Release Role ClusterDaily ProcessesThere are no daily processes for this role cluster.Weekly ProcessesThere are no weekly processes for this role cluster.Monthly ProcessesThere are no monthly processes for this role cluster.As-Needed ProcessesThere are no as-needed processes for this role cluster.Infrastructure Role ClusterDaily ProcessesProcess 1: 
Maintaining the directoryTask: Back up DFS namespace configurationProcess 2: Storage resource managementTask: Monitor available disk spaceProcess 3: Perform monitoringTask 1: Review quota levelsTask 2: Verify previous day's backup jobProcess 4: Change classificationTask: Review emergency change requestWeekly ProcessesProcess 1: Storage resource managementTask: Review disk fragmentationProcess 2: Managing resources and service performanceTask: Capture size of DFS namespaceProcess 3: Change classification and authorizationTask: Attend CAB meetingProcess 4: Reviewing configuration itemsTask: Capture configuration snapshotMonthly ProcessesProcess: Managing resources and service performanceTask 1: Create a quota reportTask 2: Create a service performance and usage reportTask 3: Create a system load and utility reportAs-Needed ProcessesThere are no as-needed processes for this role cluster.Security Role ClusterDaily ProcessesThere are no daily processes for this role cluster.Weekly ProcessesThere are no weekly processes for this role cluster.Monthly ProcessesThere are no monthly processes for this role cluster.As-Needed ProcessesThere are no as-needed processes for this role cluster.Partner Role ClusterDaily ProcessesThere are no daily processes for this role cluster.Weekly ProcessesThere are no weekly processes for this role cluster.Monthly ProcessesThere are no monthly processes for this role cluster.As-Needed ProcessesThere are no as-needed processes for this role cluster.TroubleshootingOverviewThe following troubleshooting tips should be useful in maintaining this product. 
The tips are based on known issues and follow the best practices for troubleshooting and problem management outlined by the Incident Management and Problem Management SMFs, both of which are found in the MOF Supporting Quadrant.Problem #1: “Path not found” or empty folderDescription of ProblemWhen you work with a DFS shared folder, you may receive a “Path not found” error message or you may see an empty folder. This may happen when you do the following:●Try to open a file in a DFS shared folder.●Try to find a file in a DFS shared folder.Cause of ProblemThis problem occurs when the DFS link portion of the path contains more than one long path element, and at least one long path element is referred to by the short path-name equivalent.Resolution of ProblemThere are two ways to resolve this problem:●Always use long path names when you work with shared folders that are under DFS links containing long path elements.●Use short path elements only when you create DFS links.Problem #2: Slow connection timeDescription of ProblemWhen you use DFS, your clients might take a long time to connect to one of the DFS servers. This can occur under the following conditions:●The site the client is in does not have a DFS server for the volume in question.●Your network is not fully routed.●The client is not site-aware.Cause of ProblemThis problem is caused when the DFS referral server provides a list of servers to the client. If there is no DFS server with this volume in the site of the client, the list is sorted randomly. The client walks this list until it finds the first accessible server. If the percentage of unreachable servers is high and there are many servers, finding the first working server can take several minutes.Resolution of ProblemSeveral methods can be used to resolve this problem: ●Use a user account from the local domain to log on. 
A local DFS server should then be found for the policy access.
●Install DFS servers in the location where the logon is made.
●Extend the network so that it is fully routed. Note that this may be very expensive.
●If no access to local data is required, users can log on to terminal servers in locations where a DFS server with the desired volume is located.

Problem #3: How to troubleshoot FRS-enabled DFS directories
Description of Problem
These are generic steps that can be used to troubleshoot File Replication Service (FRS)-enabled DFS directories.
Resolution of Problem
1. Verify Microsoft Active Directory® directory service replication. Active Directory replication must be fully functional between hub and spoke data centers; verify it by running repadmin /showreps against all computers in the hub and branch sites. Redirect the output to a file named so that you can identify the tool and the computer being targeted. Do not proceed until Active Directory replication is functional.
2. Verify FRS dependencies.
3. Run ntfrsutl ds against all hub and branch data centers. In this step, you are looking for missing FRS objects or attributes, including the FRS member, FRS subscriber, and ServerRef attributes. Compare the ntfrsutl output to that of a working computer until you are sure what to look for.
4. Verify the replication topology and schedule.
5. Verify that all computers are included in the replication topology (compare against a reference list to confirm that computers deployed in the field are known in the data center). This information should be contained in the configuration management database (CMDB).
6. Examine the environment for any known configurations that generate excessive replication of FRS-replicated files (DFS and SYSVOL). General symptoms of this problem are:
●The revision number for policy is constantly incrementing. In extreme cases, the number of revisions shows hundreds or even thousands of changes.
●FRS-replicated content is replicated excessively with no apparent change to the Group Policy or the files being replicated.
In the case of SYSVOL, an excessive number of full syncs of policy take place for no apparent reason.
●The number of files in the staging directory constantly grows, then empties when the replication schedule opens and replication can take place.
●The number of files in the staging directory constantly grows but never empties if changes cannot be replicated to downstream partners, whether because of network connectivity or some other error condition.
●Network traffic between replication partners consumes excessive network bandwidth, and FRS is identified as the source.
●Excessive disk I/O occurs until the FRS service is stopped.

Problem #4: Using Defrag.exe on a disk that hosts FRS-replicated content
Description of Problem
When you use the Disk Defragmenter tool (Defrag.exe) on a disk that hosts FRS-replicated content, the following symptoms may occur:
●Files in SYSVOL and DFS shares are replicated excessively, with no apparent change to the files.
●Files may replicate during off-peak hours, but at regularly occurring times, if you schedule disk defragmentation to run at specific times or during periods of low server usage.
●The number of files in the staging folder constantly grows, and then empties after the disk defragmentation utility completes or the FRS schedule opens to allow replication.
●The number of files in the staging folder constantly grows but never empties if changes cannot be replicated to downstream partners, whether because of network connectivity or some other error condition.
●Network traffic between replication partners consumes excessive network bandwidth as a result of FRS.
Resolution of Problem
1. Search the NTFRS outbound log by using Ntfrsutl and the Iologsum.cmd script (included in the Windows Server 2003 Support Tools).
2. Identify the computer that is originating the excessive updates, and then use Ntfrsutl to capture the FRS outbound log.
From a command prompt, type:
ntfrsutl outlog > outlog.txt
3. Use the Iologsum.cmd FRS troubleshooting utility to structure the outlog file that was just created. At the command prompt, type:
iologsum -sort=eventtime outlog.txt
Note: Use iologsum.cmd /? to get a list of the switches you can use to summarize the pending change orders.

Problem #5: DFS links not visible
Description of Problem
When you view the DFS root in the DFS snap-in, all the DFS links are listed. However, when you connect to the DFS root share, none of the DFS links may be visible. When you browse the DFS root folder on the server, the local file system placeholders that represent the DFS links may be missing.
Cause of Problem
This issue can occur if 8.3 file name creation is disabled in NTFS.
Resolution of Problem
1. Start Registry Editor (Regedt32.exe).
2. Locate the NtfsDisable8dot3NameCreation value under the following key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
3. Double-click the NtfsDisable8dot3NameCreation value, type 0 in the Value data box, and then click OK.
4. Quit Registry Editor, and then restart the server.

Problem #6: DFS root does not appear in MMC
Description of Problem
When you start the DFS MMC snap-in, the DFS root does not appear.
If you try to locate the DFS root in the DFS MMC snap-in, you receive the following error message:
The specified domain either does not exist or could not be contacted.
You receive this error message whether you enter the name as a NetBIOS host name or as a fully qualified domain name (FQDN). If you use the Dfscmd tool to add a DFS link, the command dfscmd /map \\dfsname\dfsshare\path \\server\share\path returns "System error 2662 has occurred".
Cause of Problem
This problem may occur if the following registry value is set on one or all of the servers that host the DFS namespace after the namespace has already been defined:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DFS
Value name: DfsDnsConfig
Type: REG_DWORD
Value: 1
Resolution of Problem
1. Start Registry Editor (Regedt32.exe).
2. Locate and then click the following key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DFS
3. Double-click the DfsDnsConfig value, type 0 in the Value data box, and then click OK.
4. Quit Registry Editor.

Problem #7: NTFS file system log file size bottlenecks
Description of Problem
These are steps to identify and troubleshoot NTFS file system log file size bottlenecks.
Cause of Problem
When a Windows Server 2003 file server is under heavy load or stress, the NTFS file system log file may not be flushed to disk in time and, as a result, becomes full. NTFS file system operations do not continue until the log file is completely flushed to disk.
Resolution of Problem
1. Check the performance logs and alerts to view the activity of the Current Disk Queue Length counter for the appropriate PhysicalDisk object. If the performance data shows constantly high disk queue length values that intermittently drop to a value of exactly one for a while and then increase again, this is an indication that the NTFS file system log file is full and must be flushed.
2. Increase the performance of the disk subsystem:
●Install additional disks, or upgrade the existing hard disks.
●Update the bus and disk controllers.
●Use striped volumes on several physical disks to increase throughput.
3. Increase the NTFS log file size. To do so, type the following at a command prompt:
chkdsk /l:<size>
If stress on the disk subsystem continues to be high, the log file may eventually become full again, so use this method only if stress on the disk subsystem is temporary.

Problem #8: Excessive CPU use by Clussvc.exe or Rsrcmon.exe
Description of Problem
If you define hundreds of file share resources within a cluster, one or more nodes in the cluster may begin to provide reduced performance. When this occurs, Task Manager may report excessive CPU use by either the Clussvc.exe or Rsrcmon.exe process.
Cause of Problem
On some clusters, several hundred resources may consume enough overhead to affect performance. The total number of resources that a cluster can process without significant overhead varies with the capabilities of the hardware.
Resolution of Problem
The most efficient way to create many file shares on a cluster is to create subfolder shares, because this option can significantly reduce the number of resources and the associated overhead. This option also simplifies administration and disaster recovery. If you must use individual file share resources for several hundred shares, it may be necessary to add more CPUs or memory to the server.

Problem #9: "A DFS root already exists in this cluster node"
Description of Problem
After you use the Cluster Administrator tool to configure a file share resource as a DFS root, you may receive the following error message:
Cluster Administrator Standard Extension: An error occurred attempting to set properties: A DFS root already exists in this cluster node.
Error ID: 5088 (000013e0).
Cause of Problem
This issue can occur if a DFS root is already configured on either of the nodes and has not been deleted. DFS permits only one root per server cluster.
Resolution of Problem
●If the DFS root is not a cluster resource, in DFS Manager, right-click the configured root, and then click Delete Root.
●If the DFS root is a cluster resource, take the resource offline and configure it as a normal share, and then either bring the resource online or delete the resource if the share is not needed.
Note: To update the DFS root settings on the other nodes, move the group that contains the old DFS root to the other node.

Problem #10: DNS name problems
Description of Problem
After you log on to a Windows 2000-based computer with cached credentials and then connect to a network (either by using remote access or by reattaching the network cable), you experience the following symptoms when you attempt to connect to a domain-based DFS root:
●You receive one of the following error messages: "A duplicate name exists on the network." or "The network name could not be found."
●The domain DNS name for your server and the NetBIOS name for your server may not match. For example, the DNS name may be <Name1>.<company>.com, and the NetBIOS name may be <Name2>.
Cause of Problem
This problem is caused by the following two conditions:
●The domain DNS name and NetBIOS name are not the same.
For example, the DNS name is <Name1>.<company>.com and the NetBIOS name is <Name2>.
●The client does not have any cached DFS information because the network was not connected when the client initialized. The client attempts to fill this cache every 15 minutes. To view this cache, use the DFS utility, DFSutil.exe, from the Windows Server 2003 Resource Kit.
Resolution of Problem
●Make the NetBIOS and DNS names of the server the same. For example, if the DNS name is <Name1>.<company>.com, then make the NetBIOS name <Name1> as well.
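The client-side referral cache mentioned above can also be inspected and reset from a command prompt. A minimal sketch follows; the switch names are from the Windows Server 2003 era DFSutil.exe, so verify them against dfsutil /? on your build before relying on them:

```shell
REM Display the client's cached DFS referrals (the PKT cache).
REM A missing entry for the domain root suggests the cache was
REM never populated, as described above.
dfsutil /pktinfo

REM Flush the cached referrals so the client requests fresh ones
REM on the next access instead of waiting for the refresh interval.
dfsutil /pktflush
```

Running /pktinfo before and after reproducing the problem is a quick way to confirm whether the symptom is a cache-population issue rather than a name-mismatch issue.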