SQL Server 2014 and Windows Azure Blob Storage Service: Better Together



SQL Server 2014 and Windows Azure Blob Storage Service: Better Together

SQL Server Technical Article

Summary: SQL Server 2014 introduces significant new features that deepen its integration with Microsoft Azure, unlocking new scenarios and providing more flexibility in the database space for IaaS data models. This technical article covers SQL Server 2014 Data Files on the Azure Blob storage service in depth, starting with step-by-step configuration and then providing guidance on scenarios, benefits and limitations, best practices, and lessons learned from early testing and adoption. Additionally, a fully featured example of a new Windows Server Failover Clustering-like capability is introduced to demonstrate the power of Microsoft Azure and SQL Server 2014 when combined.

Writers: Igor Pagliai, Francesco Cogno
Technical Reviewers: Silvano Coriani, Francesco Diaz, Pradeep M.M, Luigi Delwiche, Ignacio Alonso Portillo
Published: April 2014 (revision 2, June 2014)
Applies to: SQL Server 2014 and Microsoft Azure

Copyright

This document is provided “as-is”. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes. © 2014 Microsoft. All rights reserved.

Contents

Introduction
1. Architecture
   1.1 SQL Server Data Files in Windows Azure: overview
   1.2 Tools and interfaces
   1.3 Limitations
   1.4 Usage scenarios
   1.5 Pros and cons
2. Configuration
   2.1 Step-by-step procedure
   2.2 Best practices
   2.3 Operations
3. Implementing a failover cluster mechanism
   3.1 Introduction
   3.2 Shared nothing model in SQL Server Data Files in Windows Azure
   3.3 Endpoint mapping
   3.4 Polling architecture
   3.5 Limitations of this architecture
      3.5.1 No metadata sharing
      3.5.2 Failover at database level
      3.5.3 No availability group
   3.6 Implementation
      3.6.1 Requirements
      3.6.2 Metadata tables
      3.6.3 Polling stored procedures
      3.6.4 Polling job
   3.7 Step-by-step installation example
4. Monitoring and troubleshooting
   4.1 Windows Azure Management Portal
   4.2 Windows Azure Storage logs and analytics
   4.3 Performance counters
   4.4 SQL Server wait types
   4.5 SQL Server xEvents trace
   4.6 Errors
5. Conclusion
Appendix
   A. Wait stats analysis
   B. Enabling Transparent Data Encryption (TDE)
   C. Downloading Database file blobs using a Valet Key Pattern
   D. Microsoft SQL Server to Windows Azure helper library
   E. Installing SQL Management Objects
   G. Stored procedure reference
      1) [HAOnAzure].[PlaceDBOnHA]
      2) [HAOnAzure].[ScanForDBAndMountEndpoints]
      3) [HAOnAzure].[ScanForDetachedDBsAndAttach]
      4) [HAOnAzure].[UpsertAzureAccountCredentials]
      5) [HaOnAzure].[InsertOrUpdateEndpointOnHA]
      6) [HaOnAzure].[RemoveDBFromHA]
      7) [HaOnAzure].[RemoveEndpointFromHA]
      8) [HaOnAzure].[ShowEndpoints]
      9) [HAOnAzure].[ShowDatabasesOnHA]

Introduction

The goal of this white paper is to provide guidance to developers and IT professionals on how to leverage a new Microsoft SQL Server 2014 feature called SQL Server Data Files in Windows Azure. It explains the architecture, provides step-by-step instructions for configuration, clarifies limitations and advantages, and then provides guidance on performance, monitoring, and troubleshooting. Finally, it provides a fully functional example of how it is possible to create a custom high-availability and disaster-recovery solution, very similar to typical on-premises failover clusters in Windows Server.

Note: In the context of this white paper, SQL Server Data Files in Windows Azure is the capability of SQL Server 2014 to host database data and transaction log files directly on Windows Azure blob storage, without using Windows Azure data disks as intermediate storage containers.

This white paper has an appendix that contains all the necessary code, stored procedure references, and helper libraries. It also includes a step-by-step installation procedure for a custom cluster mechanism we implemented based on SQL Server Data Files in Windows Azure.

This feature, which was initially released in SQL Server 2014, enables native support for SQL Server database files stored as Windows Azure blobs. It enables you to create a database in SQL Server running on-premises or in a virtual machine in Windows Azure with a dedicated storage location for your data in Windows Azure blob storage. This enhancement makes it easier to move databases between machines through detach and attach operations. In addition, it provides an alternative storage location for your database backup files by allowing you to restore from or to Windows Azure Storage. SQL Server Data Files in Windows Azure enables you to create low-cost, highly available, and elastically scaling hybrid solutions that meet your organization’s requirements for data virtualization, data movement, security, and availability.

Architecture

The SQL Server Data Files in Windows Azure feature is available in all editions of SQL Server 2014; it is enabled by default and provided at no additional cost. This improvement delivers native support for SQL Server database files stored as Windows Azure blobs, making SQL Server 2014 the first truly hybrid-cloud enabled database. SQL Server runs on-premises or in a virtual machine, and it directly interacts with Windows Azure Storage. Now SQL Server instances, both on-premises and in virtual machines, can interact directly with Windows Azure blobs.
This improves agility in terms of attaching and moving databases, and it makes restore possible without data movement. It also opens the way for a highly available solution that uses Windows Azure blob storage as shared storage between multiple nodes. This feature was first introduced in SQL Server 2014. For more information about SQL Server integration with Windows Azure Storage, see the following topic in SQL Server Books Online: SQL Server Data Files in Windows Azure.

SQL Server Data Files in Windows Azure: overview

SQL Server 2014 integration with Windows Azure blob storage occurs at a deep level, directly in the SQL Server Storage Engine; SQL Server Data Files in Windows Azure is more than a simple adapter mechanism built on top of an existing software layer.

- The Manager Layer includes a new component called XFCB Credential Manager, which manages the security credentials necessary to access the Windows Azure blob containers and provides the necessary security interface; secrets are maintained encrypted and secured in the SQL Server built-in security repository in the master system database.
- The File Control Layer contains a new object called XFCB, which is the Windows Azure extension to the file control block (FCB) used to manage I/O against each single SQL Server data or log file on the NTFS file system; it implements all the APIs that are required for I/O against Windows Azure blob storage.
- At the Storage Layer, the SQL Server I/O Manager is now able to natively generate REST API calls to Windows Azure blob storage with minimal overhead and great efficiency; in addition, this component can generate information about performance counters and extended events (xEvents).

Important: The only Windows Azure blob type supported by SQL Server 2014 is the page blob; any attempt to use block blobs will fail. To understand the difference between block blobs and page blobs, see "Understanding Block Blobs and Page Blobs" in the Windows Azure documentation.

The I/O path to Windows Azure Storage is different when you use this new feature. If you use a Windows Azure data disk and create a database on it, I/O traffic passes through the Virtual Disk Driver on the Windows Azure host node. However, if you use SQL Server Data Files in Windows Azure, I/O traffic uses the Virtual Network Driver: SQL Server 2014 runs native REST API calls against Windows Azure blobs directly over the network. For more information about SQL Server in Windows Azure Virtual Machines (IaaS) scalability targets and performance optimization, see the following white paper: Performance Guidance for SQL Server in Windows Azure Virtual Machines.

Tools and interfaces

SQL Server 2014 comes with a series of tools and interfaces you can use to interact with the Windows Azure blob storage integration:

- Windows PowerShell cmdlets. New cmdlets in SQL Server 2014 can be used to create a SQL Server database that references a blob storage URL path instead of a file path.
- Performance counters. A new object called HTTP_STORAGE_OBJECT tracks activity in System Monitor when you are running SQL Server with Windows Azure Storage data or log files.
- SQL Server Management Studio. URL paths can be used in all the database-related dialog boxes for creating, attaching, restoring, and reviewing database properties; you can also use Object Explorer to browse and connect directly to Windows Azure blob storage accounts.
- SQL Server Management Objects. You can specify URLs to data or log file paths in Windows Azure Storage in SQL Server Management Objects (SMO); additionally, a new property, HasFileInCloud, in the object Microsoft.SqlServer.Management.Smo.Database returns true if the database has at least one file stored in Windows Azure blob storage.
- Transact-SQL. Transact-SQL syntax now supports URL paths in database creation and file layout management. Also, a new column called credential_id in the sys.master_files system view enables you to create joins with the sys.credentials system view and retrieve details about Windows Azure blob storage access policies and containers; a sample query follows this list.
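For example, a query along the following lines (a minimal sketch using the standard catalog views) uses the new credential_id column to list every database file hosted on Windows Azure blob storage together with the credential that secures it:

-- List blob-backed database files and the credential used to access them.
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_file_name,
       mf.physical_name,       -- full https:// URL of the blob
       c.name                  AS credential_name
FROM sys.master_files AS mf
INNER JOIN sys.credentials AS c
        ON mf.credential_id = c.credential_id;

Files stored on local disks have no associated credential, so the inner join naturally filters them out.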
Limitations

Windows Azure blob storage integration in SQL Server 2014 comes with some limitations, mainly due to the nature of Windows Azure blob storage:

- Maximum file size and number of files. The maximum size of a single page blob in Windows Azure Storage is 1 terabyte (TB). You can create a virtually unlimited number of blobs (and containers), but the maximum total size per storage account is 200 TB.
- Geo-replication. The Windows Azure blob storage geo-replication feature is not supported and must be disabled: because geo-replication is asynchronous and there is no way to guarantee write ordering between data and log files, using this feature can generate database corruption if a geo-failover occurs.
- Filestream. Filestream is not supported on Windows Azure blob storage. Only MDF, LDF, and NDF files can be stored in Windows Azure blob storage using SQL Server Data Files in Windows Azure.
- Hekaton in-memory database. Additional files used by the in-memory Hekaton feature are not supported on Windows Azure blob storage, because Hekaton uses Filestream.
- AlwaysOn. You can use AlwaysOn if you don't need to create additional files on the primary instance. For more information about AlwaysOn support in Windows Azure Virtual Machines (IaaS), see the following blog post: SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs: Notes, Details and Recommendations.

Usage scenarios

Although it is theoretically possible and officially supported, using an on-premises SQL Server 2014 installation with database files in Windows Azure blob storage is not recommended, due to high network latency that would hurt performance; for this reason, the main target scenario for this white paper is SQL Server 2014 installed in Windows Azure Virtual Machines (IaaS). This scenario provides immediate benefits for performance, data movement and portability, data virtualization, high availability and disaster recovery, and scalability limits. "Implementing a failover cluster mechanism," later in this document, also describes a creative example of a new low-cost mechanism that provides high availability and disaster recovery functionalities similar to typical on-premises failover clusters in Windows Server. These are the biggest advantages of using SQL Server data files on the Windows Azure blob storage service:

Portability: In this form (that is, with database files as blobs in Windows Azure Storage), it is easy to detach a database from a Windows Azure Virtual Machine (IaaS) and attach it to a different virtual machine in the cloud; this feature might also be suitable for implementing an efficient disaster recovery mechanism, because everything is directly accessible in Windows Azure blob storage. To migrate or move databases, use a single CREATE DATABASE Transact-SQL statement that refers to the blob locations, with no restrictions on storage account and compute resource location.
You can also use rolling upgrade scenarios if extended downtime for operating system and/or SQL Server maintenance is required.

Database virtualization: When combined with the contained database feature in SQL Server 2012 and SQL Server 2014, a database can now be a self-contained data repository for each tenant and can be dynamically moved to different virtual machines for workload rebalancing. For more information about contained databases, see the following topic in SQL Server Books Online: Contained Databases.

High availability and disaster recovery: Because all database files are now externally hosted, even if a virtual machine crashes, you can attach these files from another hot standby virtual machine that is ready to take over the processing workload. A practical example of this mechanism is provided in "Implementing a failover cluster mechanism" later in this document.

Scalability: Using SQL Server Data Files in Windows Azure, you can bypass the limitation on the maximum number of Windows Azure disks you can mount on a single virtual machine (16 for XL and A7 sizes); there is a limitation on the maximum number of I/O operations per second (IOPS) for each single Windows Azure disk. The next section discusses this limitation in greater detail. Theoretically, you should be able to mount up to 32,767 database files on each SQL Server instance, but it is very important to consider the network bandwidth required to support your workload. You should also carefully review "Pros and cons" later in this document. For more information about Windows Azure virtual machine sizes, characteristics, limitations, and performance, see the following links:

Virtual Machine and Cloud Service Sizes for Windows Azure
Performance Guidance for SQL Server in Windows Azure Virtual Machines

Mixing SQL Server data files on-premises and in Windows Azure: You can expand your SQL Server instance storage space as needed. Even better, you pay only for the Azure blob pages actually in use, not for the Azure blob pages allocated (note that you still pay for pages that are in use from the Azure blob perspective but unused as far as SQL Server is concerned). You can even mix the different types of storage based on cost, performance, and availability considerations, either manually or by using automatic data placement features such as partition functions along with a partition scheme, as shown in the sketch that follows. For more information, see the following topic in SQL Server Books Online: CREATE PARTITION SCHEME (Transact-SQL).
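The following Transact-SQL sketch illustrates the idea, assuming the TestDBonAzure database created later in this paper; the filegroup names, the local path, and the placeholder blob URL are illustrative only.

-- Filegroup on a local Windows Azure data disk for hot data.
ALTER DATABASE TestDBonAzure ADD FILEGROUP FG_LOCAL;
ALTER DATABASE TestDBonAzure ADD FILE
    (NAME = hot_data, FILENAME = 'F:\DATA\hot_data.ndf', SIZE = 10GB)
TO FILEGROUP FG_LOCAL;

-- Filegroup backed by a blob for colder, cheaper data.
ALTER DATABASE TestDBonAzure ADD FILEGROUP FG_BLOB;
ALTER DATABASE TestDBonAzure ADD FILE
    (NAME = cold_data,
     FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/cold_data.ndf',
     SIZE = 100GB)
TO FILEGROUP FG_BLOB;

-- Route rows automatically: orders before 2014 land on the blob-backed
-- filegroup, more recent orders on the local disk.
CREATE PARTITION FUNCTION pf_OrderYear (int) AS RANGE RIGHT FOR VALUES (2014);
CREATE PARTITION SCHEME ps_OrderYear AS PARTITION pf_OrderYear TO (FG_BLOB, FG_LOCAL);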
Pros and cons

SQL Server Data Files in Windows Azure provides several advantages and enables new usage scenarios, but as with any SQL Server feature, it comes with pros and cons that must be carefully evaluated. The most important consideration is related to network consumption: if you use this new feature, I/O traffic now counts toward network bandwidth and not toward the IOPS limit per single Windows Azure disk. The following table lists the main resource limits and thresholds for each virtual machine size.

Important: Values reported in Allocated Bandwidth (Mbps) are not officially covered or guaranteed by any Service Level Agreement (SLA) today. The information is reported only as a general estimation of what is expected for each virtual machine size. We hope in the future to enable more fine-grained control over network utilization and then to be able to provide an official SLA.

These are the main pros to consider:

- It is possible to scale on the number of IOPS, for database data files, far beyond the (500 IOPS x 16 disks) = 8,000 IOPS limit imposed by the use of Windows Azure additional disks.
- Databases can be easily detached and migrated or moved to different virtual machines (that is, portability).
- Standby SQL Server virtual machines can be easily used for fast disaster recovery.
- You can easily implement custom failover mechanisms. "Implementing a failover cluster mechanism" (later in this document) demonstrates how to do this.
- This mechanism is almost orthogonal to all main SQL Server 2014 features (such as AlwaysOn and backup and restore), and it is well integrated into the SQL Server management tools like SQL Server Management Studio, SMO, and SQL Server PowerShell.
- You can have a fully encrypted database, with decryption occurring only on the compute instance and never in the storage service. In other words, using this new enhancement, you can encrypt all data in the public cloud using Transparent Data Encryption (TDE) certificates that are physically separated from the data. The TDE keys can be stored in the master database, which is stored locally on your physically secure on-premises machine and backed up locally. You can use these local keys to encrypt the data, which resides in Windows Azure Storage. If your cloud storage account credentials are stolen, your data still stays secure, because the TDE certificates always reside on-premises.

These are the main cons to consider:

- I/O generated against single database files using SQL Server Data Files in Windows Azure counts against the network bandwidth allocated to the virtual machine and passes through the same single network interface card used to receive client/application traffic.
- It is not possible to scale on the number of IOPS for the single database transaction log file. Using Windows Azure data disks instead, you can use the Windows Server 2012 Storage Spaces feature and stripe up to four disks to improve performance. For more information about this technique, see the following white paper: Performance Guidance for SQL Server in Windows Azure Virtual Machines.
- Geo-replication for database file blobs is not supported. Note: Even if you are using traditional Windows Azure data disks, geo-replication is not supported if more than one single disk is used to contain all data files and the transaction log file together.
- There are some limitations, as mentioned earlier in this document in "Limitations." For example, the Hekaton in-memory database and Filestream features are not supported.

Configuration

This section provides you with a complete step-by-step procedure that creates all the necessary objects and required configuration items to implement a fully functional scenario.

Step-by-step procedure

First of all, create a Windows Azure storage account, if one is not yet available, and then create a container with Private security access.

Important: For performance reasons, we strongly recommend that you create the storage account in the same data center as the virtual machine where you will install SQL Server 2014, or, even better, use the same Windows Azure affinity group.

Because you will use at least two blobs (one for the data file and one for the log file), you should disable geo-replication. This configuration is not supported for SQL Server.
By disabling geo-replication, you can save around 30 percent on your storage costs. We have been asked several times by customers and partners why we recommend disabling geo-replication, and the reason is simple: because there is no ordering guaranteed in asynchronous storage replication, writes to the data file blob may occur before writes to the log file blob, which violates a fundamental principle and requirement of every transactional database system: Write-Ahead Logging (WAL). You can read about WAL in the following old but still valid KB article: SQL Server 7.0, SQL Server 2000, and SQL Server 2005 logging and data storage algorithms extend data reliability.

Next, create a container for your database with an access level of Private. The names of the storage account and the container must be lowercase. Note that this container has a full path of the form "https://<your storage account>.blob.core.windows.net/sqldatacontainer". This is important to remember when you use SQL Server Management Studio later in this paper. Even though the same container can be used by multiple databases, you should create only one database per container, because the security mechanism used in this procedure is based on the container itself.

Next, create a policy and a shared access signature (SAS) to protect and secure the container. SQL Server requires this policy in order to work correctly with Windows Azure Storage. There are several ways to create the policy and the SAS: you can use C# code for this simple task or use the REST APIs directly, but this procedure uses a free tool, Azure Storage Explorer, which you can find on CodePlex. Install the tool, open it, and then insert the name and the access key of your storage account. If the operation succeeds, the content of your storage account is displayed, along with the list of your containers. Select the one you want to place your database in, and then click Security.

In the Blob & Container Security dialog box, click the Shared Access Policies tab, and then create a new policy with the settings shown in the following screen shot. Click Save Policies. In some documentation you can find on the Internet, Delete is not listed as a privilege requirement, but if you want to be able to drop the database from SQL Server, you need to include Delete in the policy definition. We recommend setting a fixed limit for the expiry time setting (such as one year): although it is technically possible to have an infinite lease, it is a good practice to use a fixed limit and then renew it periodically. For more information about blob lease times and expirations, see the following blog post: New Blob Lease Features: Infinite Leases, Smaller Lease Times, and More. For more information about shared access to blob storage and containers, see the following: Shared Access Signatures, Part 1: Understanding the SAS Model.

Close the dialog box and then reopen it; the newly created policy will then be visible when you switch to the Shared Access Signatures tab for the next step. After you reopen the dialog box, click the Shared Access Signatures tab; in Container name, type sqldatacontainer, and in Policy, select sqlpolicy. When you are done, click Generate Signature. A string appears in the Signature box. Click Copy to Clipboard and then paste the string in a secure location for later reuse. Finally, close the tool. Now you can test integration with SQL Server Data Files in Windows Azure.
Open SQL Server Management Studio inside the Windows Azure virtual machine you created, and then create the SQL Server credential object that you need to access the Windows Azure blob container you created before. A SQL Server credential object stores the authentication information required to connect to a resource outside of SQL Server. The credential stores the URI path of the storage container and the SAS key values. For each storage container used by a data or log file, you must create a SQL Server credential whose name matches the container path. That is, the credential name string must be exactly the same as the blob container path; otherwise, the security information in SQL Server and Windows Azure does not match, and authentication fails. For example, with the storage container created earlier in this paper, the correct full path to use, including the HTTPS prefix, has the form https://<your storage account>.blob.core.windows.net/sqldatacontainer. To create the corresponding credential object in SQL Server, use the following code in SQL Server Management Studio, enclosing the container path in square brackets (the placeholder shows the form of the name):

CREATE CREDENTIAL [https://<your storage account>.blob.core.windows.net/sqldatacontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7'

Note: The signature in the example is for demonstration only. It is not valid.

To create the object without using Transact-SQL, use the Credentials node in SQL Server Management Studio: right-click the instance-level Security container, and then click New. In Credential name, enter the URI, and then in Identity, enter SHARED ACCESS SIGNATURE. In Password, enter the SECRET value that you copied in Azure Storage Explorer earlier. You will need to modify the string to get the correct value to enter: the value you need to provide to SQL Server is only the substring starting from the question mark (?) to the end, for example:

sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7

After the credential object is created, it is saved in the SQL Server master database in an encrypted form. After the object is stored, you do not need to maintain the object itself further, but be sure to back up the master database regularly.

You can also register and connect to blob storage accounts in Windows Azure through SQL Server Management Studio, starting with SQL Server 2012. When you click Azure Storage, the following dialog box appears. As with the other connection methods discussed here, the shared access key is stored securely in the SQL Server master database. As of this writing, you can only list, see, and in some cases delete the containers and files for storage accounts; we hope to add enhanced signature tools in future service packs so that you can create the connection without having to use external tools or code. If you try to delete a file that is in use, you receive the following error.

After the connection between Windows Azure and SQL Server is established, you can create your first database using the following Transact-SQL command (the file URLs are placeholders).

CREATE DATABASE TestDBonAzure
ON
(NAME = file_data1, FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_data1.mdf', SIZE = 10GB),
(NAME = file_data2, FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_data2.ndf', SIZE = 10GB),
(NAME = file_data3, FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_data3.ndf', SIZE = 10GB)
LOG ON
(NAME = file_log1, FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_log1.ldf', SIZE = 1GB)

Note: In our testing, this command took 7 seconds to execute. The creation of a 100 GB database data file with a 10 GB transaction log file completed in 17 seconds.
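As a quick check (an illustrative query, not part of the original procedure), you can confirm that the new database's files live on blob storage; for blob-backed files, physical_name contains the full https:// URL.

SELECT name, physical_name, size * 8 / 1024 AS size_mb  -- size is reported in 8-KB pages
FROM sys.master_files
WHERE database_id = DB_ID(N'TestDBonAzure');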
After you run the Transact-SQL command, the new items appear in SQL Server Management Studio.

It is worth mentioning that SQL Server uses temporary leases to reserve blobs for storage and that each blob lease is renewed every 45 to 60 seconds. If a server crashes and another instance of SQL Server configured to use the same blobs is started, the new instance waits up to 60 seconds for the existing lease on the blob to expire. To remove or break the lease manually for emergency reasons or troubleshooting, you can use the Lease Blob REST API or the following GUI tool: Azure Blob Lease.

Best practices

Depending on why you are interested in SQL Server Data Files in Windows Azure (performance, portability, security, scalability, or high availability and disaster recovery), different sets of best practices may apply for specific scenarios. When you use this feature, it is critical to remember the following points:

- If SQL Server Data Files in Windows Azure is used for certain SQL Server database files, IOPS against those files count toward the network bandwidth and limits assigned to your virtual machine. Today, Windows Azure does not guarantee with an SLA the network bandwidth for a specific virtual machine size, but we provide a general estimation in "Pros and cons" earlier in this document.
- If you are using Windows Azure data disks for these database files, IOPS against those files count toward the 500 IOPS limit per single disk, up to 16 disks, depending on the Windows Azure virtual machine size. 500 IOPS for each blob/data disk is a hard limit enforced through throttling, but it is not a performance guarantee for each Windows Azure data disk. For more information about performance with Windows Azure data disks, see the following white paper: Performance Guidance for SQL Server in Windows Azure Virtual Machines.

There are many important things to consider when you plan a system that includes Windows Azure. The following lists contain information about network and storage performance targets, limits, and thresholds that you should bear in mind.

Security

- Use a separate Windows Azure blob storage container for each database, and create a separate CREDENTIAL object in SQL Server for each container.
- Create Windows Azure blob storage containers with an access level of Private.
- Because Windows Azure Storage does not store blob data encrypted, if you need encryption at rest, enable SQL Server transparent data encryption (TDE). The appendix of this paper includes a sample script to enable TDE. For more information about TDE, which is fully supported in this scenario, see the following topic in SQL Server Books Online: Transparent Data Encryption (TDE).
- When you create an SAS policy, use a short-term expiration date, and be sure to schedule appropriate SQL Server credential object updates to avoid blocked access (a sample refresh statement follows this list). For more information about containers, policies, and SAS, see the following blog post: About Container Shared Access Signature for SQL Server Data Files in Windows Azure.
- All general SQL Server concepts, technologies, and features related to security, like auditing, still apply and should be adopted.
- To help ensure the highest portability and self-containment for databases, use the contained database feature in SQL Server 2014. For more information about working with contained databases, see the following topic in SQL Server Books Online: Contained Databases.
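When the policy or SAS is renewed, you can refresh the stored secret with ALTER CREDENTIAL. A minimal sketch, assuming the credential created earlier in this paper; the new SAS token is a placeholder:

ALTER CREDENTIAL [https://<your storage account>.blob.core.windows.net/sqldatacontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<new SAS token, without the leading question mark>';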
Performance

- Because each Windows Azure storage account is able to provide up to 20,000 IOPS and 200 TB of space across all object types (that is, blobs, tables, and queues), you should carefully monitor the total IOPS for all database files and ensure that you will not be throttled. For blob objects, you can expect up to 60 MB per second, or up to 500 transactions per second.
- If your database requires higher IOPS than a single blob or file can sustain (that is, well over 500 IOPS), you should not place it entirely on Windows Azure blob storage using SQL Server Data Files in Windows Azure. Until Windows Azure Storage allows you to scale up on I/O performance, the only possibility is to place the transaction log on a volume backed by Windows Server 2012 Storage Spaces in stripe mode, created inside the virtual machine. With this approach, you create your data files on Windows Azure blob storage using SQL Server Data Files in Windows Azure, leaving the transaction log inside the virtual machine on a traditional Windows Azure data disk. Here is an example of how to do this using Transact-SQL (the data file URL is a placeholder):

CREATE DATABASE TestDBonAzure
ON (NAME = file_data1, FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_data1.mdf', SIZE = 10GB)
LOG ON (NAME = file_log1, FILENAME = 'Z:\DATA\file_log1.ldf', SIZE = 1GB)

For more information about Windows Server 2012 Storage Spaces and I/O scalability in Windows Azure virtual machines, see the following white paper: Performance Guidance for SQL Server in Windows Azure Virtual Machines.
- We do not recommend the use of SQL Server Data Files in Windows Azure for the master, model, and msdb system databases: to ensure virtual machine isolation and self-containment, store these system databases on system drive C.
- We do not recommend the use of SQL Server Data Files in Windows Azure for the tempdb system database, for several reasons: because tempdb is a temporary database, there is no need to port it outside the assigned SQL Server instance; and because SQL Server Data Files in Windows Azure counts toward your assigned network bandwidth and threshold, you can save network bandwidth by using Windows Azure data disks instead.
- You should place the SQL Server virtual machine in the same Windows Azure affinity group as the storage account that you plan to use for SQL Server Data Files in Windows Azure.
- For each database, use multiple data files (blobs) until you reach the required number of IOPS: because SQL Server can use data files in parallel very efficiently, it is very easy to scale up on storage performance, at least for data files (see the sketch after this list). Be sure to check your network bandwidth consumption when using multiple data files or blobs with SQL Server Data Files in Windows Azure. Note: Only one transaction log file can be active per database; if more than one transaction log file is active per database, SQL Server Data Files in Windows Azure cannot be used.
- Even though large blob size allocation is very efficient in Windows Azure Storage (NTFS sparse file mechanism), you should allocate, at creation time, the maximum expected database file size for SQL Server data files and avoid lengthy file expansion operations.
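For example, here is a sketch of adding one more blob-backed data file to spread I/O across blobs; the file name and placeholder URL are illustrative.

ALTER DATABASE TestDBonAzure
ADD FILE (NAME = file_data4,
          FILENAME = 'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_data4.ndf',
          SIZE = 10GB);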
For transaction log files on conventional storage, SQL Server requires the entire file content to be zeroed on the storage system. This is not required for Azure page blob based transaction log files: the only part that must be initialized is the log tail, which is fixed in size, and this operation is a single, atomic REST call. So regardless of the size of your transaction log, the amount of time required to initialize it is constant. For this reason, the only constraint to evaluate during transaction log creation is its cost. Keep in mind that the same considerations about VLF size and cardinality apply to blob based transaction logs as well.

- Use the SQL Server data compression feature whenever possible to reduce I/O against Windows Azure blob storage. For more information about data compression, see the following topic in SQL Server Books Online: Data Compression.
- Use the SQL Server backup compression feature as well, to reduce the amount of data transferred to Windows Azure blob storage during database backup and restore operations. For more information about backup compression, see the following topic in SQL Server Books Online: Backup Compression (SQL Server).
- Avoid SQL Server shrink operations on database files, because these operations are extremely slow.

Cost

- Because it is not supported, you should disable Windows Azure storage account geo-replication. Disabling this feature can save you 30 percent on usage billing.
- If Windows Azure storage monitoring and logging are required, be sure to use minimal detail levels and short information retention policies.
- If you use a second SQL Server virtual machine for hot standby and you require very quick database failover, leave the virtual machine running, but use the lowest possible Windows Azure virtual machine size; when you need the virtual machine to become active, change the virtual machine size back to the desired state. This simply triggers a virtual machine restart.
- If you use a second SQL Server virtual machine for cold standby and you do not require very quick database failover, shut down and deallocate the virtual machine after it is configured, because you do not pay for deallocated machines, only for the space occupied by virtual hard disks. Be aware that restarting the virtual machine takes a few minutes.
- Carefully plan database file size for all your files, because you pay for each allocated gigabyte: even if the storage cost is minimal compared to compute resource usage, you may want to review the amount of space used to ensure cost efficiency.
- Avoid unnecessary index maintenance: use adaptive logic in your scripts to ensure that you rebuild or reorganize indexes only if necessary (a sketch follows this list).
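A minimal sketch of such adaptive logic, using sys.dm_db_index_physical_stats; the 10 and 30 percent fragmentation thresholds and the 100-page floor are common guidance, not hard rules.

-- Build REORGANIZE statements for lightly fragmented indexes and
-- REBUILD statements for heavily fragmented ones, then execute them.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
            + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id))
            + N'.' + QUOTENAME(OBJECT_NAME(i.object_id))
            + CASE WHEN ps.avg_fragmentation_in_percent > 30
                   THEN N' REBUILD;' ELSE N' REORGANIZE;' END
            + CHAR(13) + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
        ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.avg_fragmentation_in_percent > 10  -- skip barely fragmented indexes
  AND ps.page_count > 100                   -- skip tiny indexes
  AND ps.index_id > 0                       -- skip heaps
  AND i.name IS NOT NULL;

EXEC sp_executesql @sql;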
Monitoring and management

- Using the Windows Azure Management Portal, add additional key metrics to monitor performance and availability of the Windows Azure storage account (see "Windows Azure Management Portal" later in this document).
- Using the Management Portal, set up alerts on storage metrics per KPI, like I/O requests, availability, throttling, and errors (see "Windows Azure Management Portal" later in this document).
- Back up SQL Server databases directly to Windows Azure blob storage using a feature introduced in SQL Server 2012 SP1 CU2. This approach avoids the use of Windows Azure data disks, permits remote backup to other Windows Azure data centers, and helps ensure maximum portability of backup data. For more information about this feature, see the following: Tutorial: Getting Started with SQL Server Backup and Restore to Windows Azure Blob Storage Service; SQL Server Database Backup in Azure IaaS: performance tests.
- So that you always have a general overview of Windows Azure storage account health and performance, enable Windows Azure storage analytics and metrics collection at a minimal level of verbosity (see "Windows Azure Storage logs and analytics" later in this document).
- Be sure to set a reminder before SAS expiration to update the CREDENTIAL object in SQL Server; otherwise, access will be denied by the Windows Azure storage infrastructure.
- Be sure to save all the scripts for database and credential creation in a secure place outside the main SQL Server virtual machine.
- Periodically review SQL Server 2014 performance counters related to SQL Server Data Files in Windows Azure (see "Performance counters" later in this document) and wait statistics (see "SQL Server wait types" later in this document).

High availability and disaster recovery

- The officially supported and recommended mechanism for high availability for SQL Server 2014 (and SQL Server 2012) in Windows Azure Virtual Machines (IaaS) is AlwaysOn Availability Groups. For more information about AlwaysOn Availability Groups and SQL Server IaaS, see the following blog post: SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs: Notes, Details and Recommendations.
- SQL Server Data Files in Windows Azure also makes it possible to implement different custom high availability mechanisms. A fully functional example is provided in "Implementing a failover cluster mechanism" later in this document.
- If possible, include SQL Server virtual machines and all data and log files in the same storage account. The storage account is the failover unit if a disaster occurs at the level of the Windows Azure data center or storage rack.
- Save the database creation script outside the main SQL Server virtual machine for fast database recovery in case of a virtual machine crash: this script contains the full HTTPS paths to Windows Azure blob storage necessary to reattach the database to a new virtual machine.
- Be sure also to save the SQL Server credential creation script for the SAS policy. SQL Server needs this script to mount blobs and files. Keep this file in a secure place, because it contains security information.

For more information about Windows Azure storage performance targets, limits, and thresholds, see the following blog post: Azure Storage Scalability and Performance Targets.

Operations

Now that you have your SQL Server instance in a Windows Azure virtual machine and your database is directly on Windows Azure blob storage using SQL Server Data Files in Windows Azure, consider the following items related to database maintenance, management, and life cycle:

- If you drop the database, the blob files are removed.
- If you detach the database, SQL Server releases the lease on the Windows Azure blob files immediately. The following screen shot of the Windows Azure Management Portal shows the blob properties after the database is detached.
- If you take the database offline, as with detaching, SQL Server releases the lease on the Windows Azure blob files immediately (illustrative statements for both operations follow this list).
- As long as SQL Server maintains a lease on the blob files, you cannot externally drop, modify, or mount the files in another SQL Server instance. If you need to drop or modify such files, you must detach the database or stop the SQL Server service.
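For illustration, either of the following statements releases the blob leases, using the database created earlier in this paper.

-- Take the database offline; SQL Server releases the blob leases immediately.
ALTER DATABASE TestDBonAzure SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Alternatively, detach the database so that another instance can attach the same blobs.
EXEC master.dbo.sp_detach_db @dbname = N'TestDBonAzure';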
If you need to programmatically break the lease on database files, you can do so by incorporating the necessary logic inside SQL Server itself. For more information about how to break the lease programmatically, see the following blog post: Blob leases: how to break them from SQL Server (using SQL CLR and Windows Azure REST API).

From a secondary SQL Server instance, if you try to attach database files that are already being used by a primary SQL Server instance, you receive an error similar to this one:

Msg 5120, Level 16, State 140, Line 5
Unable to open the physical file "". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".

This error appears because a lease already exists at the Windows Azure blob storage level, and only one instance at a time can have access.

If you need to attach a previously detached database, even on a different SQL Server instance or virtual machine, use the CREATE DATABASE Transact-SQL command with the FOR ATTACH switch, as shown here (the file URLs are placeholders):

CREATE DATABASE [TestDBonAzure]
ON PRIMARY (NAME = N'file_data1', FILENAME = N'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_data1.mdf', SIZE = 10485760KB, MAXSIZE = 1TB, FILEGROWTH = 1024KB)
LOG ON (NAME = N'file_log1', FILENAME = N'https://<your storage account>.blob.core.windows.net/sqldatacontainer/file_log1.ldf', SIZE = 1048576KB, MAXSIZE = 1TB, FILEGROWTH = 512MB)
FOR ATTACH

Implementing a failover cluster mechanism

This section presents a complete custom solution that provides functionalities similar to those of on-premises Windows Server failover clusters.

Introduction

One of the biggest challenges in any IT department is to offer a highly available service, regardless of its nature. This holds true for Microsoft SQL Server. While on-premises SQL Server installations enable you to make use of the Windows Server failover cluster technology, as of this writing this solution is not supported in Windows Azure Virtual Machines (IaaS). SQL Server 2014, however, with SQL Server Data Files in Windows Azure, enables you to access shared storage directly. With the required connectivity and security permissions, each blob can be accessed by any SQL Server installation, both on-premises and in the cloud. Windows Azure virtual machines have out-of-the-box Internet connectivity, including the Windows Azure Storage URI, even across data centers. The preceding figure shows that every SQL Server instance that has access to those blobs can attach the database. You can use this architecture to help protect against software failures:

Architecture                  Protection level
Different instances           SQL Server instance failure
Different virtual machines    Virtual machine failure
Different data centers        Data center failure*

*Note: Precautions that help protect you from data center failure do not protect you from blob storage failure, because the blob geo-replication feature is not supported for Microsoft SQL Server database files. However, you can use geo-replication for database backups.

The Windows Azure platform, in addition, supports multiple endpoints linked specifically to a virtual machine (stand-alone endpoints) or shared between all virtual machines in the same cloud service (load-balanced endpoints). In the context of this white paper, we'll use and refer to stand-alone endpoints: these endpoints are mapped to a specific port pair, local and remote, and, at a given time, point to a specific virtual machine in the deployment.
This configuration is discussed in detail in "Endpoint mapping" later in this document. By migrating these endpoints from one virtual machine to another, an endpoint failover can be simulated, achieving the same result as an AlwaysOn Availability Group listener failover. For more information, see the following topics on MSDN: List Cloud Services; Get Cloud Service Properties.

Shared nothing model in SQL Server Data Files in Windows Azure

The previous section showed that every Windows Azure virtual machine can access the blobs that compose the database. In order to prevent simultaneous access to the same blob by two or more SQL Server instances, SQL Server must implement a mechanism similar to the shared nothing model in Windows Server failover clusters. For more information about the shared nothing model in failover clusters, see "Cluster Strategy: High Availability and Scalability with Industry-Standard Hardware."

In the Windows Azure Storage platform, SQL Server uses the concept of a lease. Each container and blob can have a lease active on it. While leased, a container or blob can only be accessed via a valid lease ID. Because SQL Server does not share the lease ID, it is guaranteed to be the only one that can access the blob (or container) at a given time. There are two types of leases: time based and never expiring (often referred to as fixed). SQL Server 2014 uses time based leases, so in case of problems, the blobs eventually become available again on their own. The use of time based leases also means that SQL Server must renew a lease before its expiration. Here is a simplified sequence diagram.

Note that SQL Server releases the lease if you detach the database manually or if you shut down SQL Server gracefully. While the lease is active, no REST API request is allowed unless it specifies the lease ID. For example, the Put Page operation, used by SQL Server to update a database page, is refused unless a valid lease ID is specified. For more information about REST API requests, see "Put Page (REST API)". You can see the lease status of a blob by clicking Edit in the Windows Azure Management Portal.

Because, by definition, a single database file blob can be leased by a single SQL Server instance at any given time, you can use the lease as the guarantee required to replicate the shared nothing model. A database can be composed of many blobs, and each blob maintains an independent lease. It is impossible, however, to attach different files of the same database in different instances, because an instance must have access to the PRIMARY filegroup in order to bring the database online. Because there is only a single PRIMARY filegroup in a given database, it is not possible to attach the same database in two different instances.

The remainder of this section refers to the Microsoft SQL Server to Windows Azure helper library, which is an easy-to-use tool that is available for free on CodePlex. The concepts discussed here can be applied regardless of what kind of technology you use. For more information about how to install the library on your instance, see the appendix. Using the library, you can call the List Blobs REST API method and retrieve the current lease status of a given blob. Use the sys.master_files catalog view to identify the SQL Server databases attached to an instance using the SQL Server Data Files in Windows Azure feature. Because the blobs are in use, you can confirm the existence of a valid lease; a sketch of such a query follows.

Note: These examples use the MERGE join hint to minimize the number of REST calls. The purpose of this hint is performance related and therefore out of the scope of this white paper.
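The following is a hedged sketch of this check; it correlates the instance's blob-backed files from sys.master_files with the helper library's ListBlobs output. The result-set column names used here ([Url], [LeaseStatus]) are assumptions for illustration; check the library documentation for the actual shape.

SELECT mf.physical_name, b.[LeaseStatus]
FROM sys.master_files AS mf
INNER MERGE JOIN [Azure].ListBlobs(
    '<your storage account>', '<your storage account shared key>', 1,
    'sqldatacontainer', 1, 1, 1, 1, NULL) AS b
    ON mf.physical_name = b.[Url]
WHERE mf.physical_name LIKE 'https://%';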
If you detach the database, or if a failure occurs that leaves the instance unable to keep the lease active, the blobs become available. In this situation, any SQL Server instance that can connect to Windows Azure Storage can also attach the database (given the proper credentials).

Endpoint mapping

Whenever you create a virtual machine, you are given the option to add it to an existing cloud service. In general, this enables you to scale horizontally to achieve the required performance. In order to exploit cloud horizontal scaling, every node must be homogeneous. Windows Azure balances the connections across the available nodes. While this feature is essential for Web Sites, and in general for stateless services, SQL Server does not support a scale-out architecture. (SQL Server Reporting Services supports this configuration; however, it is not discussed here. For more information about using Reporting Services in a scale-out architecture, see "Configure a Native Mode Report Server Scale-Out Deployment.")

Another, less common approach is to statically bind a cloud port to a specific node instance. This creates an external port to local port mapping that routes traffic arriving on the public external port to the internal port. This process is often called port forwarding. In this configuration you can forward one (or more) SQL Server TCP ports to static Windows Azure cloud ports. You can also map more external ports to the same internal port. This kind of decoupling enables you to simulate different endpoints while in fact using only one.

Windows Azure port forwarding can be changed using the Management REST API. Remapping the endpoint to a different SQL Server instance disrupts the existing socket connection. This disruption is similar to what happens during a network address resource failover phase in failover clusters. The SQL Server TDS protocol forces clients to authenticate again when endpoints are remapped. This should not be a problem, because your applications should already be able to handle transient errors gracefully: this best practice holds true for this framework too.

In this framework, the examples in this paper work at the database level: each database can fail over on its own. An endpoint needs to be exposed for each clustered database. The framework helps ensure that, at a given time, the endpoint is mapped to the instance that owns the related database. The preceding figure shows how a single cloud service (which holds the virtual IP) allows you to map remote ports to ports in the virtual machines. Note that you can have more cloud service ports besides the SQL TCP endpoints; those ports can be mapped to other services (for example, Remote Desktop). There is no need to use multiple TCP/IP ports in SQL Server. You can reuse the same local port and map it to multiple remote ports. This configuration is easier to maintain from the SQL Server perspective; it is also more secure, because it reduces the attack surface area.

Polling architecture

In the failover cluster architecture, the Windows Cluster service handles intra-node communication and the failure detection mechanism. This is achieved, among other ways, by heartbeat communication between the nodes. In the model discussed here, there is no external service that queries the status of the other instances. This model relies instead on the lease status of the database's blobs.
That means that each instance that participates in this topology should query the database blob lease status at regular intervals. This process is often referred to as polling. As soon as an instance detects an unleased database (in other words, one with its blobs unlocked), it should try to attach it. The database downtime is in this case directly influenced by the polling frequency. The expected downtime in case of a failure can be expressed in this way:

τ = (L + P) / 2 + R

Where:
- τ is the average downtime.
- L is the SQL Server lease time (60 seconds).
- P is the polling interval.
- R is the recovery time of the database.

In this architecture you can set P directly, and, with less precision, you can influence R by specifying the recovery interval configuration option. For example, if you poll the blobs every 120 seconds and the recovery time is 15 seconds, you can expect the database to be back online after an average of (60 + 120) / 2 + 15 = 105 seconds, that is, 1 minute and 45 seconds. If you add the endpoint to the equation, it looks like this:

τ = (L + P) / 2 + R + E

Where:
- E is the time needed to move an endpoint from the old owner to the new one. This time is in general less than 30 seconds, but because the Windows Azure platform is constantly improving, you should perform your own testing.

Limitations of this architecture

No metadata sharing

While this architecture allows you to achieve independence between the database and the holding instance, as far as the partial containment allows, there are some limitations. The most notable is the need to maintain a shared repository in the instances so that you know which blobs to watch and monitor. This can be easily achieved through a simple backup and restore operation (using Windows Azure Storage as the backup destination makes this even easier). As the number of instances that participate in this architecture grows, however, the alignment task becomes more demanding. This is somewhat balanced by how easy it is to add another instance to the architecture: all you have to do is copy the metadata database and start the polling jobs.

Another approach, the one taken by the framework explained in the appendix, is to use the Table service in Windows Azure Storage. Those tables are, by definition, shared among every instance and are therefore ideal candidates for shared metadata. In Windows Azure, tables do not support distributed transactions: each call is atomic on its own. For the purpose of metadata sharing this is not an issue, because the examples in this paper access those tables in read-only mode most of the time and in write mode only through user interaction. A complete discussion of Windows Azure tables is beyond the scope of this white paper. For more information, see "Windows Azure Table Storage and Windows Azure SQL Database - Compared and Contrasted".

Failover at database level

The other limitations are directly related to the partial database containment. You still have to align noncontained entities manually. You can use the sys.dm_db_uncontained_entities dynamic management view to help locate those objects. This task is commonly done when you use AlwaysOn Availability Groups or SQL Server mirroring. For more information about how to perform those tasks, see SQL Server Books Online.

No availability group

There is no communication between the nodes; each node attempts to attach a database as soon as it notices that its blobs are unlocked.
In this scenario there can be no guarantee that a group of databases will end up attached to the same instance. For this reason, you should consider mapping an endpoint to a single database.

Implementation

The following sections cover in detail all the requirements for a fully functional solution, along with descriptions of all of the configuration objects that are necessary for the implementation.

Requirements

In order to implement this mechanism, you must have:

- At least two Windows Azure virtual machines in the same cloud service with SQL Server 2014 installed.
- A TCP stand-alone endpoint available in Windows Azure for each database to be clustered.
- The master key for each Windows Azure storage account you want to use.
- The Windows Azure management certificate, including its private key.
- A valid SAS for each database blob container.

Important: Although you can use multiple Windows Azure storage accounts for multiple databases, we highly recommend that you place all files that are related to a single database into the same account; this avoids potential situations where storage accounts could fail over independently to different Windows Azure data centers.

Virtual machines in the same Windows Azure cloud service

In order to support endpoint mapping, every SQL Server instance, or better, its virtual machine, must share the same Windows Azure cloud service. This can be done by selecting the appropriate cloud service during virtual machine instantiation.

TCP endpoint

Although Windows Azure endpoint management is handled by the solution's logic, you must open a TCP listener in SQL Server manually. You can map multiple external ports to the same local port, which is the solution we recommend, but you can also add more TCP endpoints to SQL Server directly. There are many ways to do this. One is to open SQL Server Configuration Manager and explore the network configuration node of your instance. From there, double-click the TCP/IP section. You can add as many TCP ports as you want, separating them with commas. Restart the instance to activate your changes. Also note that if you specify a port in TCP Port, SQL Server will not start if that port is unavailable. Using TCP Dynamic Ports would avoid this, but it is not compatible with this solution, which requires a fixed port binding between SQL Server and Windows Azure.

Windows Azure storage shared key

The Windows Azure storage account master key is required by the Windows Azure storage REST API in order to sign the requests. Each key is tied to a specific storage account. Keep in mind that you can have more than one storage account per Windows Azure subscription. To find the shared keys, use the Storage section of the Windows Azure Management Portal: click Storage, and then click Manage Access Keys. There are always two shared keys; you can use either one.

Windows Azure management certificate

The endpoint management REST API calls require a valid management certificate in order to be successful. These certificates can be obtained in various ways. For testing purposes, you can use the makecert and pvk2pfx tools. For example, create a certificate azuretest.cer:

makecert -r -sv C:\temp\azuretest.pvk -n "CN=Azure Self signed certificate" C:\temp\azuretest.cer

Then package the private key and certificate into a .pfx file, providing the file locations on the local computer and the passwords:
pvk2pfx -pvk C:\temp\azuretest.pvk -pi <Password> -spc C:\temp\azuretest.cer -po <Password> -pfx C:\temp\azuretest.pfx

You will end up with three files:
azuretest.cer – the public key only certificate.
azuretest.pvk – the certificate private key.
azuretest.pfx – the certificate with both the private key and the public key.

For more information, see the following topics on MSDN:
Makecert.exe (Certificate Creation Tool)
Pvk2Pfx

You have to upload the public key only certificate (.cer) to Windows Azure. You can use the Windows Azure Management Portal. You also need to import the certificate with both private and public keys (.pfx) into every SQL Server instance that will form the cluster. You must install it into the “My” store of the SQL Server engine execution account. For that task you can use either the Microsoft Management Console (MMC) or the Windows Server 2012 cmdlet Import-Certificate. For more information, see the following topics on TechNet:
Import a Certificate
Import-Certificate

As a best practice, use certificates dedicated to this specific purpose. You should secure the certificates very carefully, because anyone who gains access to them has complete control over your Windows Azure account. We strongly recommend that you mark the private key as nonexportable while importing it into the certificate stores, and that you never store the .pfx file on the server itself.

Valid SAS for the database blob containers
SQL Server Data Files in Windows Azure requires you to specify a valid shared access signature (SAS) in order to access its blobs without having to authenticate every request. This is a very important performance feature. There are two types of SASs: direct and policy based. Direct SASs sign the resource with the shared access key itself. This means that the only way to invalidate a direct SAS is to regenerate the shared access key; if you want to invalidate one direct SAS, you must invalidate all of them. For this reason alone, you should use policy based SASs, even though they are more complex. You can still use direct SASs if you want to. For more information about direct SASs and how to generate them from SQL Server, see the blog post “About Container Shared Access Signature for SQL Server Data Files in Windows Azure”. For more information about SAS, see the following:
Create and Use a Shared Access Signature
Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS

Policy based SASs require you to create the container policy first. The advantage of a policy based SAS is that you can invalidate the SAS by invalidating the policy alone. This makes SAS control more granular. You can use various methods. This white paper shows you how to do that using the Microsoft SQL Server to Windows Azure helper library (available on CodePlex), because with this tool you work with familiar Transact-SQL statements. You can use the tool that works best for you.
In this example you create a container called “demoacl” and attach to it a policy called “frcogno Demo ACL Policy” that gives full control to the container and its contents. Note that whenever you see <your storage account> and <your storage account shared key>, you should replace those with your storage account name and storage account shared key, respectively.

First, create a container.

EXEC [Azure].CreateContainer
    '<your storage account>', '<your storage account shared key>', 1,
    'demoacl', 'Off';

Use the following command to verify that the container was created.

SELECT * FROM [Azure].ListContainers('<your storage account>', '<your storage account shared key>', 1, NULL);

Using the following stored procedure, create the policy and attach it to the container you just created.

EXEC [Azure].AddContainerACL
    '<your storage account>', '<your storage account shared key>', 1,
    'demoacl',                  -- Container name
    'Off',                      -- Public access?
    'frcogno Demo ACL Policy',  -- Policy name
    '2010-01-01',               -- Policy validity start
    '2014-12-31',               -- Policy expiration
    1,                          -- Give read access?
    1,                          -- Give write access?
    1,                          -- Give delete blob privilege?
    1                           -- Give list blob privilege?

Note that you must specify a validity interval. This example uses a long time span; as a best practice, you should use the smallest interval that is manageable. Check to see that your policy is in place.

SELECT * FROM [Azure].GetContainerACL('<your storage account>', '<your storage account shared key>', 1, 'demoacl', NULL, NULL, NULL);

To check the policy, upload a sample blob. You can use a simple text file.

DECLARE @buffer VARBINARY(MAX);
SET @buffer = CONVERT(VARBINARY(MAX), N'This text comes from SQL Server!' + CONVERT(NVARCHAR, GETDATE()));
PRINT @buffer;
EXEC [Azure].CreateOrReplaceBlockBlob
    '<your storage account>', '<your storage account shared key>', 1,
    'demoacl', 'sample.txt', @buffer, 'text/plain', 'ucs-2';
GO

Next, list the blobs in the container to check the file.

SELECT * FROM [Azure].ListBlobs('<your storage account>', '<your storage account shared key>', 1, 'demoacl', 1, 1, 1, 1, NULL);

Because the container was created as private, it should not be possible to access the blob without authentication. Confirm this by pasting the blob URI in a browser. Now generate the policy based SAS.

SELECT [Azure].GeneratePolicyBlobSharedAccessSignatureURI(
    'https://<your storage account>.blob.core.windows.net/demoacl',
    '<your storage account shared key>',
    'c',
    'frcogno Demo ACL Policy');

Note that you specified ‘c’ as the resource type (c as container). The result is the policy based SAS URI. You can also test the GET operation with the browser, specifying the blob name instead of the container name to show its contents. If this test is successful, you can use the policy based SAS as a credential for the SQL Server Data Files in Windows Azure database.
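For illustration, here is a minimal sketch (not part of the original example) of how the policy based SAS plugs into SQL Server Data Files in Windows Azure: the credential name must be the container URL, the identity must be the literal string 'SHARED ACCESS SIGNATURE', and the secret is the SAS token returned by the previous query, without the leading question mark. The database and file names below are placeholders.

-- Map the policy based SAS to a SQL Server credential named after the container URL.
CREATE CREDENTIAL [https://<your storage account>.blob.core.windows.net/demoacl]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token, without the leading ?>';
GO
-- With the credential in place, the database files can live in the container.
CREATE DATABASE DemoAclDB
ON (NAME = DemoAclDB_data,
    FILENAME = 'https://<your storage account>.blob.core.windows.net/demoacl/DemoAclDB.mdf')
LOG ON (NAME = DemoAclDB_log,
    FILENAME = 'https://<your storage account>.blob.core.windows.net/demoacl/DemoAclDB.ldf');
GO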
Metadata tables
In order to automate the take-ownership process, you have to tell SQL Server which database blobs to poll. You also need to store the link between a database and its endpoint. Although there are many ways to achieve that, the simplest approach is to use two tables.

Database table
The database table fields are shown here.

Field name: Description
Account Name: The Windows Azure storage account name.
Database Name: The name of the database as it appears in the name field of sys.databases.
File: The name of the file as it appears in the name field of sys.master_files.
Type: The type of the file as it appears in the type field of sys.master_files.
FileName: The physical URI of the file as it appears in the physical_name field of sys.master_files.

For more information, see the following topics in SQL Server Books Online:
sys.master_files (Transact-SQL)
sys.databases (Transact-SQL)

Endpoint table

Field name: Description
Certificate thumbprint: The Windows Azure management certificate thumbprint as it appears in the “My” certificate store of the SQL Server engine execution account.
Subscription ID: The Windows Azure subscription ID.
Service name: The Windows Azure service name.
Deployment slot: The deployment slot used in the cloud service. By default, this is “Production”, but it can be customized. For more information, see “Get Deployment”.
Database Name: The name of the database as it appears in the name field of sys.databases.
Endpoint Name: An arbitrary name that describes the endpoint. This must be unique.
Port: The port exposed from outside the cloud service. Can be any unused port.
Local Port: The local port to bind to the outside port.
Protocol: The protocol to use. Currently only “tcp” is supported.
Virtual IP (VIP): The external IP address of the cloud service.

Each endpoint row specifies how a database will be accessed. Connect to one of the following pairs:
<CloudServiceName>,<Port>
<Virtual IP>,<Port>
Then ensure that SQL Server is listening on <LocalPort>. The logic ensures that the endpoint is mapped to the right instance; that is, the one that holds the database.
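The white paper does not mandate a specific physical schema for these tables; the following is a minimal, hypothetical sketch derived from the field lists above. Table names, data types, and keys are illustrative only.

-- Hypothetical schema for the polling metadata (a sketch, not the shipped scripts).
CREATE TABLE [HAOnAzure].[DatabaseFile] (
    AccountName   NVARCHAR(255)  NOT NULL,  -- Windows Azure storage account name
    DatabaseName  SYSNAME        NOT NULL,  -- sys.databases.name
    [FileName]    SYSNAME        NOT NULL,  -- sys.master_files.name
    [Type]        TINYINT        NOT NULL,  -- sys.master_files.type
    PhysicalName  NVARCHAR(4000) NOT NULL,  -- sys.master_files.physical_name (blob URI)
    PRIMARY KEY (DatabaseName, [FileName])
);
GO
CREATE TABLE [HAOnAzure].[Endpoint] (
    CertificateThumbprint NVARCHAR(64)  NOT NULL,  -- management certificate thumbprint
    SubscriptionId        NVARCHAR(64)  NOT NULL,  -- Windows Azure subscription ID
    ServiceName           NVARCHAR(255) NOT NULL,  -- cloud service name
    DeploymentSlot        NVARCHAR(32)  NOT NULL DEFAULT 'Production',
    DatabaseName          SYSNAME       NOT NULL,
    EndpointName          NVARCHAR(255) NOT NULL UNIQUE,
    Port                  INT           NOT NULL,  -- port exposed outside the cloud service
    LocalPort             INT           NOT NULL,  -- local port bound to the outside port
    Protocol              NVARCHAR(8)   NOT NULL DEFAULT 'tcp',
    VirtualIP             NVARCHAR(64)  NOT NULL   -- external IP address of the cloud service
);
GO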
Polling stored procedures

Attach database
After the proper metadata tables are in place, the attach logic is implemented in the [Azure].[ScanForDetachedDBsAndAttach] stored procedure (for the complete source code, see the appendix).

Mount endpoint
The mount endpoint operation is slightly more complicated, because it must be driven by the instance that owns the database. This logic is implemented in the [Azure].[ScanForDBAndMountEndpoints] stored procedure (for the complete source code, see the appendix).

Polling job
After the metadata tables are in place, schedule the two stored procedures in the correct order: attach first, then endpoint mount, as sketched below.
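As an illustrative sketch (not the paper's official script), a SQL Server Agent job that runs the two procedures every 2 minutes could look like this; the job, step, and schedule names are arbitrary.

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'HaOnAzure polling';
EXEC dbo.sp_add_jobstep @job_name = N'HaOnAzure polling',
    @step_name = N'Attach unleased databases',
    @subsystem = N'TSQL', @database_name = N'HaOnAzure',
    @command = N'EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;',
    @on_success_action = 3;  -- on success, go to the next step
EXEC dbo.sp_add_jobstep @job_name = N'HaOnAzure polling',
    @step_name = N'Mount endpoints',
    @subsystem = N'TSQL', @database_name = N'HaOnAzure',
    @command = N'EXEC [HaOnAzure].ScanForDBAndMountEndpoints;';
EXEC dbo.sp_add_jobschedule @job_name = N'HaOnAzure polling',
    @name = N'Every 2 minutes',
    @freq_type = 4,             -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,      -- repeat on a minute basis
    @freq_subday_interval = 2;  -- this is the polling interval P
EXEC dbo.sp_add_jobserver @job_name = N'HaOnAzure polling';
GO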
Step-by-step installation example
This step-by-step procedure shows you how to implement a Windows Azure SQL Server cluster with two nodes (one with Instance A and the other with Instance B). It places two databases in high availability: MindFlavorDB and OtherDB. MindFlavorDB will be available at port 1600 and OtherDB at port 1601 of the VIP 137.117.214.251. Before you execute this example, complete the following installation steps:
Install the Microsoft SQL Server to Windows Azure helper library as instructed in section [D] in the appendix.
Install the necessary SQL Server management objects as instructed in section [E] in the appendix.

Prerequisites
Make sure to have the requirements detailed in “Implementation” (earlier in this document) in place. Make sure that TCP ports 1600 and 1601 are available.

Initial installation
On both instances:
Make sure the CLR is enabled. You can use this script.

EXEC sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO

Make sure both instances share the same TCP port. The default, 1433, is used in this sample, but it is not mandatory.
Verify that the Windows Azure management certificate is present in the “My” certificate store of the SQL Server engine execution account. The certificate must have the private key, and it must not require a user password for its access.
Make sure all the noncontained entities required by both databases are replicated in both instances. This includes the required Windows Azure Storage credential or credentials.

On Instance A and Instance B:
Download and install the Microsoft SQL Server to Windows Azure helper library. You can use the script provided in the section “Library setup script” to create the database. Note that the script will require you to specify the DLL location. The script will create a database called HaOnAzure. Note: if you download an updated version of the helper library, customize the provided setup script so that it creates a database called HaOnAzure.
Install the SQL Server cluster metadata tables and scripts. You can use the ones in the installation section. Note that those objects will be created in the HaOnAzure database.
Make sure that one database (MindFlavorDB) is attached to Instance A and the other (OtherDB) is attached to Instance B.

Configuration
On Instance A:
Find the database ID of the database (MindFlavorDB). You can use the sys.databases catalog view.

SELECT * FROM sys.databases;

Call the stored procedure [HAOnAzure].[PlaceDBOnHA] for the database ID. For example, if the database ID is 10, use the following.

EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = 10;

Call [HAOnAzure].[UpsertAzureAccountCredentials] to store the Windows Azure account and shared key.

EXEC [HAOnAzure].[UpsertAzureAccountCredentials]
    @account = '<account here>',
    @secret = '<secret here>';

On Instance B, repeat the steps you performed on Instance A for the second database in our example:
Find the database ID of the database (OtherDB). You can use the sys.databases catalog view.

SELECT * FROM sys.databases;

Call the stored procedure [HAOnAzure].[PlaceDBOnHA] for the database ID. For example, if the database ID is 10, use the following.

EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = 10;

Call [HAOnAzure].[UpsertAzureAccountCredentials] to store the Windows Azure account and shared key.

EXEC [HAOnAzure].[UpsertAzureAccountCredentials]
    @account = '<account here>',
    @secret = '<secret here>';

On either instance (A or B):
Insert a row in the [HAOnAzure].[Endpoint] table for each of the two endpoints.

EXEC [HAOnAzure].[InsertOrUpdateEndpointOnHA]
    '<Certificate Thumbprint>', '<Subscription ID>', '<Cloud Service name>',
    'MindFlavorDB', 'TCP-1600', 1600, '137.117.214.251', 1433, 'Production', 'tcp';
GO
EXEC [HAOnAzure].[InsertOrUpdateEndpointOnHA]
    '<Certificate Thumbprint>', '<Subscription ID>', '<Cloud Service name>',
    'OtherDB', 'TCP-1601', 1601, '137.117.214.251', 1433, 'Production', 'tcp';

For more information about the values to specify here, see “Endpoint table” earlier in this document. Check the validity of the configured data. First look at the databases in HA.

EXEC [HAOnAzure].[ShowDatabasesOnHA];

Then check the endpoint configuration.

EXEC [HaOnAzure].[ShowEndpoints];

Single node check
This step is optional and is used to ensure the correctness of the steps done so far. Perform the following steps on Instance A or Instance B.
Call the [HaOnAzure].ScanForDetachedDBsAndAttach stored procedure:

EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;

Because both databases are already attached, the output should report that there is nothing to attach.
Call [HaOnAzure].ScanForDBAndMountEndpoints to create and bind the endpoints. Because the binding is done only by the database-owning machine, you should call it on both instances.

EXEC [HaOnAzure].ScanForDBAndMountEndpoints;

On the first execution, the REST code creates the endpoint and binds it to Instance A; this call takes around 30 seconds. On Instance B the result should be similar, for the database that Instance B owns.
Call [HaOnAzure].ScanForDBAndMountEndpoints again to make sure nothing happens.

EXEC [HaOnAzure].ScanForDBAndMountEndpoints;

This call should be faster.

Test the database failover
This section is optional. On Instance B:
Call the [HaOnAzure].ScanForDetachedDBsAndAttach stored procedure and check that everything works.

EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;

Call [HaOnAzure].ScanForDBAndMountEndpoints to make sure nothing happens.

EXEC [HaOnAzure].ScanForDBAndMountEndpoints;

On Instance B, detach a database.

EXEC sp_detach_db [OtherDB];

On Instance A:
Call the [HaOnAzure].ScanForDetachedDBsAndAttach stored procedure, which attaches the database.

EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;

To move the endpoint, call [HaOnAzure].ScanForDBAndMountEndpoints and wait. Because the endpoint must be unmounted first, the wait time is doubled to about 1 minute.

EXEC [HaOnAzure].ScanForDBAndMountEndpoints;

Test remote connectivity to a database in case of failover
From a remote machine, connect to the endpoint for TCP port 1600. Note that you cannot use the SQL Server Browser service. Run this batch.

SELECT @@SERVERNAME, * FROM sys.databases WHERE name IN ('MindFlavorDB', 'OtherDB');

Note that there are two databases on this instance. The physical instance name can be found in the first column (the xxxxCTP2). Perform a failover of database MindFlavorDB. For more information, see the procedure called “Test the database failover.” At the end of the failover, run the same batch again.

SELECT @@SERVERNAME, * FROM sys.databases WHERE name IN ('MindFlavorDB', 'OtherDB');

The physical server has now changed to “xxxxCTP3”, and the remote connection can be resolved to the correct instance without external intervention; however, because TDS is a stateful protocol, you might have to reauthenticate the connection.

Add node to the HA cluster
To add a node to this cluster, follow these steps:
Add the virtual machine to the same cloud service.
Install the management certificate in the “My” store of the SQL Server engine execution account.
Install the HAOnAzure database and call [HAOnAzure].[UpsertAzureAccountCredentials] to add the mandatory local metadata. You can alternatively migrate an existing HAOnAzure database via backup and restore, as sketched below.
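Because SQL Server 2014 supports backup to and restore from Windows Azure Storage, one convenient way to migrate the metadata database is through a container in the same storage account. This is a sketch under stated assumptions: a backups container exists, and the credential uses the storage account name and shared key (the names below are placeholders).

-- On an existing node: back up the metadata database to Windows Azure Storage.
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = '<your storage account>',
SECRET = '<your storage account shared key>';
GO
BACKUP DATABASE HaOnAzure
TO URL = 'https://<your storage account>.blob.core.windows.net/backups/HaOnAzure.bak'
WITH CREDENTIAL = 'AzureBackupCred', FORMAT;
GO
-- On the new node: restore it (the same credential must exist there).
RESTORE DATABASE HaOnAzure
FROM URL = 'https://<your storage account>.blob.core.windows.net/backups/HaOnAzure.bak'
WITH CREDENTIAL = 'AzureBackupCred';
GO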
Monitoring and troubleshooting
In every troubleshooting process, the first step is collecting adequate information to diagnose the problem. Troubleshooting SQL Server Data Files in Windows Azure requires almost the same information used in every traditional SQL Server installation:
Guest operating system application and system event logs
SQL Server error log
SQL Server Profiler trace
Windows Azure Management Portal
Windows Azure Storage logs and analytics
SQL Server performance counters
SQL Server wait types
SQL Server xEvents trace

Two important considerations apply to this list:
SQL Server masks Windows Azure blob storage level access errors inside traditional SQL Server error messages related to the I/O subsystem. If you want to trap and look at the real Windows Azure error message and code, use the xfcb_failed_request xEvents trace.
If you want to troubleshoot SQL Server Data Files in Windows Azure in depth, go to the Windows Azure storage level and enable diagnostics and logging as explained in the next section.

There is nothing specific to SQL Server Data Files in Windows Azure in the operating system application and system event logs, the SQL Server error log, or in SQL Server Profiler. The following sections therefore focus on the other information items mentioned in the preceding list. As of this writing, no public trace flag or DBCC command is specifically related to this feature.

Windows Azure Management Portal
When using SQL Server Data Files in Windows Azure, SQL Server 2014 relies directly on Windows Azure blobs, which are effectively its storage system. Using the Windows Azure Management Portal you can check basic counters about your storage account on the Monitor tab. Using this web interface, you can see immediately whether the Windows Azure blob storage has experienced availability problems (Availability), the percentage of successful operations (Success Percentage), and the total number of requests sent or received at the storage account level (Total Requests). In addition to the basic metrics displayed by default, you can also add many others. Finally, you can create automatic alerts based on a large number of metrics: if a threshold you select is crossed, Windows Azure subscription administrators receive a notification.

Note: If you want to visualize counters and create alerts, enable and configure monitoring for the storage account in the Configure tab.

At a minimum, we recommend that you enable alerts for Throttling Error, Availability, Total Requests, and Network Error Percentage. Windows Azure Storage comes with a strong SLA on service availability of 99.9 percent. For more information, see the Windows Azure Storage Service Level Agreement.

Windows Azure Storage logs and analytics
For every storage account in Windows Azure, you can (and here you need to) enable analytics and logging at the storage account level to get detailed storage information and errors; in the Windows Azure Management Portal you can enable logging for only the blob storage. For more information about how to configure logging, see “How To Monitor a Storage Account”. After you enable logging for blob storage, you can retrieve information from the $logs special container in your storage account, using one of the many tools available over the Internet to browse Windows Azure Storage.
Here is a list of some commonly used tools:
Azure Storage Explorer
CloudXplorer

The $logs container stores detailed information about the following:
Successful requests
Failed requests, including timeout, throttling, network, authorization, and other errors
Requests that use an SAS, including failed and successful requests

You can use Microsoft Excel to extract and analyze this information; Excel offers many ways to filter and aggregate data. The following articles contain examples:
Monitoring Windows Azure VM Disk Performance
Storage Analytics Logging - How to Enable and Where to Find the logs

If you also want a graphical report of how your storage is performing, in terms of latency and number of requests, enable Windows Azure Storage monitoring (in addition to the previous logging) and use the tool described in the blog post “Storage Analytics Viewer”.

Among the most useful fields you can retrieve using Windows Azure Storage logs and analytics, in conjunction with the powerful Excel import and filtering capabilities, are the following:
operation-type: indicates a write operation on the blob.
request-status: shows the success or failure of the operation.
end-to-end-latency-in-ms: tracks the effective latency of the Windows Azure blob storage.
requested-object-key: shows the SQL Server database file being accessed; this information is useful for drilling through latency analysis at the single file level.

After you configure Excel to display different types of logging and performance information, you can use it to troubleshoot SQL Server Data Files in Windows Azure. For example, you can:
Use column filtering on request-status and search for errors.
Aggregate all entries based on requested-object-key, and then calculate minimum, average, and maximum access time using measures on column end-to-end-latency-in-ms. This method is useful for troubleshooting slow performance of database file access.

Performance counters
A new performance counter object, SQLServer:HTTP Storage, reports information on Windows Azure Storage integration. The object instances you can select are based on the storage accounts used. This list describes each performance counter:
Avg. Bytes/Read: The average number of bytes transferred from the HTTP storage per read.
Bytes scheduled: The number of bytes that were scheduled in some way (rather than just being directly sent to the underlying miniport).
Bytes scheduled/sec: The number of bytes per second from this flow that have been scheduled.
Bytes transmitted: The number of bytes from this flow that have been sent.
Bytes transmitted/sec: The number of bytes per second from this flow that have been sent.
HTTP Storage IO retry/sec: The number of retry requests sent to the HTTP storage per second.
Nonconforming packets scheduled: The number of packets that have entered the packet scheduler at a rate that exceeded the flow parameters.
Nonconforming packets scheduled/sec: The rate at which nonconforming packets have entered the packet scheduler.
Pacer Flow: Statistics from the packet scheduler.
Packets scheduled: The number of packets that were scheduled in some way (rather than just being directly sent to the underlying miniport).
Packets transmitted: The number of packets from this flow that have been sent.
Packets transmitted/sec: The number of packets per second from this flow that have been sent.
Reads/Sec: The number of reads per second on the HTTP storage.
Transfers/Sec: The number of read and write operations per second on the HTTP storage.
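These counters are also exposed through the sys.dm_os_performance_counters dynamic management view, so you can read them from Transact-SQL as well; for example:

-- List the HTTP Storage counters for every storage account instance.
-- The LIKE filter matches the object name regardless of the instance prefix.
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%HTTP Storage%'
ORDER BY counter_name, instance_name;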
SQL Server wait types
There is no specific wait type for SQL Server Data Files in Windows Azure, but because this feature is related to database file I/O, you can use the sys.dm_os_wait_stats dynamic management view to search for wait types related to I/O, like IO_COMPLETION, WRITE_COMPLETION, WRITELOG, and PAGEIOLATCH_*. For more information about wait types, see the following topic in SQL Server Books Online: sys.dm_os_wait_stats (Transact-SQL).

If you want information specific to each data or log file of any database, you can use the following query. It returns an ordered list based on descending values for I/O stalls.

SELECT DB_NAME(t1.DbId) AS 'Database',
    t2.name AS 'File Name',
    t2.physical_name AS 'File Path',
    t1.IoStallMS,
    NumberReads, BytesRead, IoStallReadMS,
    NumberWrites, BytesWritten, IoStallWriteMS
FROM ::fn_virtualfilestats(NULL, NULL) AS t1
JOIN sys.master_files AS t2
    ON t1.DbId = t2.database_id AND t1.FileId = t2.file_id
WHERE LEFT(t2.physical_name, 5) = 'https'
ORDER BY t1.IoStallMS DESC;

SQL Server xEvents trace
The following new xEvents events support the SQL Server Data Files in Windows Azure feature:
xfcb_failed_request: A request to Windows Azure Storage failed to complete.
xfcb_request_opened: A request was opened to Windows Azure Storage.
xfcb_header_obtained: A response header was obtained from the request to Windows Azure Storage.
xfcb_blob_properties_obtained: A Windows Azure Storage blob property was obtained from a response header.
xfcb_read_complete: Read from the Windows Azure Storage response is complete.
xfcb_send_complete: Request send to Windows Azure Storage is complete.
xfcb_write_complete: Write to Windows Azure Storage is complete.

SQL Server xEvents is a powerful tool that provides granular and in-depth information. It is an excellent technology for trapping low-level Windows Azure blob storage native error codes. Because it is deeply integrated in the SQL Server engine, you can also analyze a complex series of events, even outside of the specific context of this feature. SQL Server 2014 provides an easy way to set up an xEvents session by using a wizard, in addition to Transact-SQL queries. The relevant events for SQL Server Data Files in Windows Azure are located in the io category and the Operational channel, and for every event a series of information attributes is available. xfcb_failed_request is the most important event to use for monitoring Windows Azure blob storage problems: it returns the Windows Azure error code (errorcode), the file or blob affected (file_path), and the number of retries attempted (retry_count) by the SQL Server storage engine. For more information about SQL Server xEvents, see the following topic in SQL Server Books Online: Extended Events.
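As a sketch, an event session that traps failed requests could be defined as follows; the session and target file names are illustrative, and the event is assumed to live in the sqlserver package.

-- Trap failed Windows Azure Storage requests in an event file target.
CREATE EVENT SESSION [xfcb_failures] ON SERVER
ADD EVENT sqlserver.xfcb_failed_request
ADD TARGET package0.event_file (SET filename = N'xfcb_failures.xel');
GO
ALTER EVENT SESSION [xfcb_failures] ON SERVER STATE = START;
GO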
Errors
To avoid errors due to unsupported features or limitations, first review the SQL Server Data Files in Windows Azure limitations. For more information, see “SQL Server Data Files in Windows Azure” in SQL Server Books Online. The list of errors that you might encounter when you use the SQL Server 2014 data files in Windows Azure Storage feature follows.

Authentication errors
Cannot drop the credential '%.*ls' because it is used by an active database file.
You may see this error when you try to drop a credential that is still being used by an active database file in Windows Azure Storage. To drop the credential, first you must delete the associated blob that has this database file. For more information, see “ICloudBlob.Delete Method”. To delete a blob that has an active lease, you must first break the lease. For more information, see “ICloudBlob.BreakLease Method”.

Shared Access Signature has not been created on the container correctly.
Make sure that you have created a shared access signature (SAS) on the container correctly. Review the instructions given in Lesson 2 in “Tutorial: SQL Server Data Files in Windows Azure Storage service”.

SQL Server credential has not been created correctly.
Make sure that you have used 'Shared Access Signature' for the Identity field and created a secret correctly. Review the instructions given in Lesson 3 in “Tutorial: SQL Server Data Files in Windows Azure Storage service”.

Database errors
Errors when creating a database
Review the instructions given in Lesson 4 in “Tutorial: SQL Server Data Files in Windows Azure Storage service”.

Errors when running the ALTER statement
Make sure to execute the ALTER DATABASE statement when the database is online. When you copy the data files to Windows Azure Storage, always create a page blob, not a block blob; otherwise, ALTER DATABASE fails. Review the instructions given in Lesson 7 in “Tutorial: SQL Server Data Files in Windows Azure Storage service”.

Error code 5120
Unable to open the physical file "%.*ls". Operating system error %d: "%ls"
Currently, this new enhancement does not support more than one SQL Server instance accessing the same database files in Windows Azure Storage at the same time. For example, if ServerA is online with an active database file and ServerB, which has a database that points to the same data file, is started, ServerB fails to start the database with the following error:

Msg 5120, Level 16, State 140, Line 5
Unable to open the physical file "". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".

To resolve this issue, first determine whether you need ServerA to access the database file in Windows Azure Storage. If not, simply remove any connection between ServerA and the database files in Windows Azure Storage. To do this, follow these steps (see the sketch after this list):
Set the file path of ServerA to a local folder by using the ALTER DATABASE statement.
Take the database offline in ServerA.
Copy the database files from Windows Azure Storage to the local folder in ServerA. This ensures that ServerA still has a copy of the database locally.
Put the database back online.
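A minimal sketch of these steps, assuming a database named TestDB1 with one data file and one log file; the logical file names and the local path are placeholders.

-- Step 1: point the files at a local folder.
ALTER DATABASE TestDB1 MODIFY FILE (NAME = TestDB1_data, FILENAME = 'C:\SQLData\TestDB1.mdf');
ALTER DATABASE TestDB1 MODIFY FILE (NAME = TestDB1_log,  FILENAME = 'C:\SQLData\TestDB1.ldf');
GO
-- Step 2: take the database offline.
ALTER DATABASE TestDB1 SET OFFLINE;
GO
-- Step 3: copy the blobs from the container to C:\SQLData, and then:
ALTER DATABASE TestDB1 SET ONLINE;
GO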
Conclusion
In this white paper we introduced you to SQL Server Data Files in Windows Azure, another big step toward Windows Azure integration in SQL Server 2014. The “Architecture” section covered basic concepts, usage scenarios, and pros and cons to show the benefits of this feature in your SQL Server environments. The “Configuration” section walked through a step-by-step procedure to configure and use SQL Server Data Files in Windows Azure for your existing databases, along with tested best practices and operation management. “Implementing a failover cluster mechanism” provided a real and fully functional example, including complete source code, of a way to design a new high-availability mechanism; we hope this section provided you with a deeper understanding of the power and flexibility of this new technology. Finally, “Monitoring and troubleshooting” explained what you need to know to support, monitor, and troubleshoot this new SQL Server database deployment model, highlighting the differences between the new model and traditional on-premises environments.

For more information:
SQL Server Web site
SQL Server TechCenter
SQL Server DevCenter
SQL Server Integration with Windows Azure Storage
Tutorial: Integrating SQL Server Data Files in Windows Azure Storage service
Tutorial: SQL Server Data Files in Windows Azure Storage service
Performance Guidance for SQL Server in Windows Azure Virtual Machines
Virtual Machine and Cloud Service Sizes for Windows Azure
Getting Started with SQL Server in Windows Azure Virtual Machines
SQL Server Engineers blog
SQL Server Database Backup in Azure IaaS: performance tests
SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs: Notes, Details and Recommendations

Did this paper help you? Please give us your feedback. Tell us, on a scale of 1 (poor) to 5 (excellent), how would you rate this paper, and why have you given it this rating? For example:
Are you rating it high due to having good examples, excellent screen shots, clear writing, or another reason?
Are you rating it low due to poor examples, fuzzy screen shots, or unclear writing?
This feedback will help us improve the quality of white papers we release. Send feedback.

Appendix

A. Wait stats analysis

WITH [Waits] AS (
    SELECT [wait_type],
        [wait_time_ms] / 1000.0 AS [WaitS],
        ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
        [signal_wait_time_ms] / 1000.0 AS [SignalS],
        [waiting_tasks_count] AS [WaitCount],
        100.0 * [wait_time_ms] / SUM([wait_time_ms]) OVER() AS [Percentage],
        ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
    FROM sys.dm_os_wait_stats
    WHERE [wait_type] NOT IN (
        N'CLR_SEMAPHORE', N'LAZYWRITER_SLEEP', N'RESOURCE_QUEUE',
        N'SQLTRACE_BUFFER_FLUSH', N'SLEEP_TASK', N'SLEEP_SYSTEMTASK',
        N'WAITFOR', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION', N'CHECKPOINT_QUEUE',
        N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT', N'XE_DISPATCHER_JOIN',
        N'LOGMGR_QUEUE', N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'BROKER_TASK_STOP',
        N'CLR_MANUAL_EVENT', N'CLR_AUTO_EVENT', N'DISPATCHER_QUEUE_SEMAPHORE',
        N'TRACEWRITE', N'XE_DISPATCHER_WAIT', N'BROKER_TO_FLUSH',
        N'BROKER_EVENTHANDLER', N'FT_IFTSHC_MUTEX', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'DIRTY_PAGE_POLL', N'SP_SERVER_DIAGNOSTICS_SLEEP')
)
SELECT [W1].[wait_type] AS [WaitType],
    CAST([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S],
    CAST([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S],
    CAST([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S],
    [W1].[WaitCount] AS [WaitCount],
    CAST([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage],
    CAST(([W1].[WaitS] / [W1].[WaitCount]) AS DECIMAL(14, 4)) AS [AvgWait_S],
    CAST(([W1].[ResourceS] / [W1].[WaitCount]) AS DECIMAL(14, 4)) AS [AvgRes_S],
    CAST(([W1].[SignalS] / [W1].[WaitCount]) AS DECIMAL(14, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2] ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS], [W1].[ResourceS],
    [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
HAVING SUM([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold
B. Enabling Transparent Data Encryption (TDE)

-- Create a master key and a server certificate.
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MySQLKey01';
GO
CREATE CERTIFICATE MySQLCert WITH SUBJECT = 'SQL - Azure Storage Certification';
GO

-- Create a backup of the server certificate in the master database.
-- Store TDE certificates in the source machine locally.
BACKUP CERTIFICATE MySQLCert
TO FILE = 'C:\certs\MySQLCert.CER'
WITH PRIVATE KEY (
    FILE = 'C:\certs\MySQLPrivateKeyFile.PVK',
    ENCRYPTION BY PASSWORD = 'MySQLKey01');

-- Then, encrypt your new database by following these steps:
-- Switch to the new database.
-- Create a database encryption key that is protected by the server certificate
-- in the master database.
-- Alter the new database to encrypt the database using TDE.
USE TestDB1;
GO
-- Encrypt your database.
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MySQLCert;
GO
ALTER DATABASE TestDB1 SET ENCRYPTION ON;
GO
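In the shared-nothing topology described in this paper, the server certificate that protects the database encryption key is one of the noncontained entities that must exist on every instance that can attach the database. The following is a sketch of restoring it on another node from the backup created above; file paths and the password match the preceding script.

-- On every other node that may attach the TDE-protected database:
USE master;
GO
CREATE CERTIFICATE MySQLCert
FROM FILE = 'C:\certs\MySQLCert.CER'
WITH PRIVATE KEY (
    FILE = 'C:\certs\MySQLPrivateKeyFile.PVK',
    DECRYPTION BY PASSWORD = 'MySQLKey01');
GO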
C. Downloading database file blobs using a Valet Key pattern
When you delegate access to a resource, in some cases you might want to restrict the functionality you grant, and you also need the ability to revoke those grants. The traditional approach is to define access control lists (ACLs) that detail the permission matrix. However, ACLs require the ability to verify the identity of the account (even Everyone is a special kind of identity). Another approach doesn’t require you to identify the grantee: the required privilege is granted by the ownership of a specific token. This second approach, known as the Valet Key pattern, is like giving a valet a key to park your car: a valet doesn’t need to own your car in order to park it; all that is necessary is a key that you provide.

Windows Azure Storage implements the Valet Key pattern using shared access signature (SAS) URIs. SAS URIs are static (that is, immutable) URIs that embed both the resource identification and the granted privilege. For example, a SAS URI could give write and list access to the sascontainer container in the storageaccount storage account from 2012-02-12 until 2013-04-13; anyone using that URI can write to and list the specified container. For more information, see “Shared Access Signatures, Part 1: Understanding the SAS Model”.

SAS URIs are used by SQL Server Data Files in Windows Azure to access the required database; a SAS can be mapped to a stored access policy instead of signing the resource directly. The advantage of a stored access policy mapped URI over a direct one is that you can revoke the policy at any time, which invalidates the related SAS URIs. If you give out a direct SAS URI, you cannot revoke it short of invalidating the shared encryption keys altogether.

You can generate your own SAS URI and use it with HTTP programs, such as Microsoft Internet Explorer and Google Chrome, that don’t understand the Windows Azure REST APIs (and therefore are unable to authenticate the requests correctly). The suggested workflow in this case is:
1. Generate a SAS policy for the blob to download. The policy should:
   Be specific to the container.
   Grant only read access (following the minimum privilege principle).
   Have a very narrow timeframe: the policy should start immediately and expire as soon as the download operation terminates. However, there may be slight clock differences with the virtual machine; plan accordingly.
2. Generate the SAS URI using the SAS policy previously generated.
3. Download the blob using your HTTP(S) program of choice, pointing it to the SAS URI generated in step 2.
4. Invalidate the SAS policy.

The last step is optional, but it is a best practice. This workflow can be used, for example, to download a database backup saved in a Windows Azure container (available from SQL Server 2012 SP1 CU2). For more information, see the following links:
SQL Server Backup and Restore with Windows Azure Blob Storage Service
Cumulative update package 2 for SQL Server 2012 Service Pack 1

Even if the TCP protocol is robust, there is still a chance of receiving a corrupt blob. There are many ways of ensuring the consistency of a download; Windows Azure supports the MD5 hash in a way that depends on the type of blob (page or block). For more information about the different types of blobs, see “Understanding Block Blobs and Page Blobs”. Internet browsers are not aware of the Windows Azure MD5 hash, so you cannot rely on them for this particular issue. You should perform a server-side hashing and compare it to a client-side hash; if the hashes do not match, you must download the blob again.

Consider this example: you want to download a blob from the frcogno container in the enosg storage account (the blob URI itself is omitted here). Following best practices, the container is private (that is, public access is disabled) and you need to use the REST API to access it. Here are the steps you need to follow.

Step 1

EXEC [Azure].[AddContainerACL]
    'enosg',          -- storage account name
    '<shared key>',   -- shared key
    1,                -- use HTTPS
    'frcogno',        -- container name
    'Off',            -- container public access
    'testACL',        -- SAS policy name
    '2014-01-01',     -- valid from
    '2014-02-01',     -- valid to
    1,                -- can read?
    0,                -- can write?
    0,                -- can delete blobs?
    0,                -- can list blobs?
    NULL, 0, NULL;

Step 2

SELECT [Azure].[GeneratePolicyBlobSharedAccessSignatureURI](
    '<blob URI>',     -- the URI of the blob to download
    '<shared key>',
    'b',
    'testACL');

Step 3

EXEC [Azure].[RemoveContainerACL]
    'enosg', '<shared key>', 1,
    'frcogno', 'testACL';

D. Microsoft SQL Server to Windows Azure helper library
This library is available at no cost from CodePlex. The sample cluster implementation was built using version 1.7.0.2 (change-set 67b8b2385f7a). Because the library is likely to be updated, the SQL CLR import scripts are included in this appendix.
You can download the required binaries from the CodePlex page.Library setup script-- *********************************************************************************-- In SQL Server Management Studio, open this file and press CTRL+Shift+M to ********-- configure it BEFORE executing it.-- *********************************************************************************USE [master];GOCREATE ASYMMETRIC KEY [AzureKey] FROM EXECUTABLE FILE = '<ITPCfSQL.Azure.CLR.dll path, nvarchar(4000), DLLPath>'CREATE LOGIN [AzureLogin] FROM ASYMMETRIC KEY [AzureKey];GRANT EXTERNAL ACCESS ASSEMBLY TO [AzureLogin];GOCREATE DATABASE HaOnAzure;GOALTER DATABASE [HaOnAzure] SET RECOVERY SIMPLE;GOUSE [HaOnAzure];GOEXEC sp_changedbowner 'sa';GOCREATE SCHEMA [Azure];GOCREATE SCHEMA [Azure.Embedded];GOCREATE SCHEMA [Azure.Management];GOCREATE ASSEMBLY [ITPCfSQL.Azure.CLR] FROM '<ITPCfSQL.Azure.CLR.dll path, nvarchar(4000), DLLPath>'WITH PERMISSION_SET=EXTERNAL_ACCESS;GO--------- Std--------- BlobCREATE FUNCTION Azure.ListContainers(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255), LastModified DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus NVARCHAR(255)) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListContainers;GOCREATE PROCEDURE Azure.CreateContainer(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @ContainerPublicReadAccess NVARCHAR(255), @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateContainer;GOCREATE PROCEDURE Azure.DeleteContainer(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER = NULL, @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteContainer;GOCREATE FUNCTION Azure.GetContainerACL(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,@ContainerName NVARCHAR(255),@LeaseId UNIQUEIDENTIFIER = NULL,@TimeoutSeconds INT = 0,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS TABLE(Id NVARCHAR(64), Start DATETIME, Expiry DATETIME, Permission NVARCHAR(255), ContainerPublicReadAccess NVARCHAR(255)) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].GetContainerACL;GOCREATE PROCEDURE Azure.ChangeContainerPublicAccessMethod(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @ContainerPublicReadAccess NVARCHAR(255), @LeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ChangeContainerPublicAccessMethod;GOCREATE PROCEDURE Azure.AddContainerACL(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @ContainerPublicReadAccess NVARCHAR(255),@accessPolicyId NVARCHAR(64), @start DATETIME, @expiry DATETIME, @canRead BIT = 0, @canWrite BIT = 0, @canDeleteBlobs BIT = 0, @canListBlobs BIT = 0, @LeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].AddContainerACL;GOCREATE PROCEDURE Azure.RemoveContainerACL(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @accessPolicyId 
NVARCHAR(64), @LeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].RemoveContainerACL;GOCREATE FUNCTION Azure.ListBlobs(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,@container NVARCHAR(4000), @includeSnapshots BIT, @includeMetadata BIT, @includeCopy BIT, @includeUncommittedBlobs BIT,@xmsclientrequestId NVARCHAR(4000) = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255), BlobSequenceNumber INT, BlobType NVARCHAR(255), ContentEncoding NVARCHAR(255), ContentLanguage NVARCHAR(255), ContentLength BIGINT, ContentMD5 NVARCHAR(255), ContentType NVARCHAR(255), CopyId UNIQUEIDENTIFIER, CopySource NVARCHAR(255), CopyStatus NVARCHAR(255), CopyCurrentPosition BIGINT, CopyTotalLength BIGINT, CopyCompletionTime DATETIME, LastModified DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus NVARCHAR(255), Metadata XML) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListBlobs;GOCREATE FUNCTION Azure.DownloadBlockBlob(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,@container NVARCHAR(4000), @blobName NVARCHAR(4000), @xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS VARBINARY(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadBlockBlob;GOCREATE FUNCTION Azure.DownloadPageBlob(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,@container NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER = NULL,@startPosition BIGINT = NULL, @length INT = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS VARBINARY(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadPageBlob;GOCREATE PROCEDURE Azure.CreateOrReplaceBlockBlob(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @contentType NVARCHAR(255) = 'application/octect-stream', @contentEncoding NVARCHAR(255) = NULL,@contentMD5 NVARCHAR(255) = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob;GOCREATE PROCEDURE Azure.PutPage(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @startPositionBytes BIGINT, @bytesToUpload INT = NULL,@leaseId UNIQUEIDENTIFIER = NULL,@contentMD5 NVARCHAR(255) = NULL,@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage;GOCREATE FUNCTION Azure.PutPageFunction(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @startPositionBytes BIGINT, @bytesToUpload INT = NULL,@leaseId UNIQUEIDENTIFIER = NULL,@contentMD5 NVARCHAR(255) = NULL,@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS NVARCHAR(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage_Function;GOCREATE FUNCTION Azure.CreateOrReplaceBlockBlobFunction(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding NVARCHAR(255) = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS 
NVARCHAR(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob_Function;GOCREATE PROCEDURE Azure.DeleteBlob(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @blobDeletionMethod NVARCHAR(255), @leaseID UNIQUEIDENTIFIER = NULL, @snapshotDateTimeToDelete DATETIME = NULL,@xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteBlob;GOCREATE PROCEDURE Azure.CopyBlob(@destinationAccount NVARCHAR(255), @destinationSharedKey NVARCHAR(255), @useHTTPS bit, @sourceAccountName NVARCHAR(255),@sourceContainerName NVARCHAR(4000), @sourceBlobName NVARCHAR(4000), @sourceLeaseId UNIQUEIDENTIFIER = NULL, @destinationLeaseId UNIQUEIDENTIFIER = NULL, @destinationContainerName NVARCHAR(4000), @destinationBlobName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CopyBlob;GOCREATE FUNCTION Azure.GetBlobProperties(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,@container NVARCHAR(4000), @blob NVARCHAR(4000), @snapshotDateTime DATETIME = NULL, @xmsclientrequestId NVARCHAR(4000) = NULL)RETURNS TABLE( ETag NVARCHAR(255), BlobSequenceNumber BIGINT, BlobType NVARCHAR(255), ContentEncoding NVARCHAR(255), ContentLanguage NVARCHAR(255), ContentLength BIGINT, ContentMD5 NVARCHAR(255), ContentType NVARCHAR(255), CopyId UNIQUEIDENTIFIER, CopySource NVARCHAR(255), CopyStatus NVARCHAR(255), CopyStatusDescription NVARCHAR(4000), CopyCurrentPosition BIGINT, CopyTotalLength BIGINT, CopyCompletionTime DATETIME, LastModified DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus NVARCHAR(255), CacheControl NVARCHAR(255), [Date] DATETIME, RequestId UNIQUEIDENTIFIER, [Version] NVARCHAR(255) ) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].GetBlobProperties;GOCREATE PROCEDURE [Azure].UpdateBlobMetadata(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,@container NVARCHAR(4000), @blob NVARCHAR(4000), @attributeList XML,@leaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].UpdateBlobMetadata;GOCREATE PROCEDURE Azure.CreateOrReplacePageBlob(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @maximumSize BIGINT, @leaseId UNIQUEIDENTIFIER = NULL,@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding NVARCHAR(255) = NULL,@contentLanguage NVARCHAR(255) = NULL,@timeout INT = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplacePageBlob;GO-- QueueCREATE PROCEDURE [Azure].CreateQueue(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1, @queueName NVARCHAR(255), @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].CreateQueue;GOCREATE FUNCTION [Azure].ListQueues(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,@prefix NVARCHAR(255) = NULL, @includeMetadata BIT = 0, @timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues;GOCREATE FUNCTION 
[Azure].ListQueues_Faulty(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,@prefix NVARCHAR(255) = NULL, @includeMetadata BIT = 0, @timeoutSeconds INT = 60, @xmsclientrequestId NVARCHAR(255) = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues_Faulty;GOCREATE PROCEDURE [Azure].EnqueueMessage(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,@queueName NVARCHAR(255), @xmlMessage XML, @timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].EnqueueMessage;GOCREATE PROCEDURE [Azure].DequeueMessage(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,@queueName NVARCHAR(255), @visibilityTimeoutSeconds INT = 10, -- lower time might recover from failure faster. @timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].DequeueMessage;GOCREATE PROCEDURE [Azure].RetrieveApproximateMessageCount(@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,@queueName NVARCHAR(255), @timeoutSeconds INT = 60, @xmsclientrequestId NVARCHAR(255) = NEWID)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].RetrieveApproximateMessageCount;GO-- TableCREATE PROCEDURE Azure.CreateTable(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @TableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].CreateTable;GOCREATE PROCEDURE Azure.DropTable(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @TableName NVARCHAR(4000), @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].DropTable;GOCREATE FUNCTION Azure.ListTables(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @xmsclientrequestId NVARCHAR(4000) = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX)) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].ListTables;GOCREATE FUNCTION Azure.QueryTable(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @tableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)RETURNS TABLE(PartitionKey NVARCHAR(4000), RowKey NVARCHAR(4000), [TimeStamp] DATETIME, Attributes XML) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].QueryTable;GOCREATE PROCEDURE Azure.DeleteEntity(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @TableName NVARCHAR(4000), @PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000),@xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].DeleteEntity;GOCREATE PROCEDURE Azure.InsertOrReplaceEntity(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit, @TableName NVARCHAR(4000), @PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000), @AttributeList XML,@xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].InsertOrReplaceEntity;GO------------Embedded------------ BlobCREATE FUNCTION [Azure.Embedded].ListContainers(@LogicalConnectionName NVARCHAR(255),@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255), LastModified DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus NVARCHAR(255)) AS EXTERNAL NAME 
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListContainers_Embedded;GOCREATE PROCEDURE [Azure.Embedded].CreateContainer(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @ContainerPublicReadAccess NVARCHAR(255), @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateContainer_Embedded;GOCREATE PROCEDURE [Azure.Embedded].DeleteContainer(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER = NULL, @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteContainer_Embedded;GOCREATE FUNCTION [Azure.Embedded].ListBlobs(@LogicalConnectionName NVARCHAR(255),@container NVARCHAR(4000), @includeSnapshots BIT, @includeMetadata BIT, @includeCopy BIT, @includeUncommittedBlobs BIT,@xmsclientrequestId NVARCHAR(4000) = NULL)RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255), BlobSequenceNumber INT, BlobType NVARCHAR(255), ContentEncoding NVARCHAR(255), ContentLanguage NVARCHAR(255), ContentLength BIGINT, ContentMD5 NVARCHAR(255), ContentType NVARCHAR(255), CopyId UNIQUEIDENTIFIER, CopySource NVARCHAR(255), CopyStatus NVARCHAR(255), CopyCurrentPosition BIGINT, CopyTotalLength BIGINT, CopyCompletionTime DATETIME, LastModified DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus NVARCHAR(255), Metadata XML) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListBlobs_Embedded;GOCREATE FUNCTION [Azure.Embedded].DownloadBlockBlob(@LogicalConnectionName NVARCHAR(255),@container NVARCHAR(4000), @blobName NVARCHAR(4000), @xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS VARBINARY(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadBlockBlob_Embedded;GOCREATE FUNCTION [Azure.Embedded].DownloadPageBlob(@LogicalConnectionName NVARCHAR(255),@container NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER = NULL,@startPosition BIGINT = NULL, @length INT = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS VARBINARY(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadPageBlob_Embedded;GOCREATE PROCEDURE [Azure.Embedded].CreateOrReplaceBlockBlob(@LogicalConnectionName NVARCHAR(255),@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @contentType NVARCHAR(255) = 'application/octect-stream', @contentEncoding NVARCHAR(255) = NULL,@contentMD5 NVARCHAR(255) = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob_Embedded;GOCREATE FUNCTION [Azure.Embedded].CreateOrReplaceBlockBlobFunction(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @contentType NVARCHAR(255) = 'application/octect-stream', @contentEncoding NVARCHAR(255) = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS NVARCHAR(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob_Function_Embedded;GOCREATE PROCEDURE [Azure.Embedded].DeleteBlob(@LogicalConnectionName NVARCHAR(255),@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @blobDeletionMethod NVARCHAR(255), @leaseID UNIQUEIDENTIFIER = NULL, @snapshotDateTimeToDelete DATETIME = NULL,@xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteBlob_Embedded;GOCREATE PROCEDURE 
[Azure.Embedded].CopyBlob(@DestinationLogicalConnectionName NVARCHAR(255),@sourceAccountName NVARCHAR(255),@sourceContainerName NVARCHAR(4000), @sourceBlobName NVARCHAR(4000), @sourceLeaseId UNIQUEIDENTIFIER = NULL, @destinationLeaseId UNIQUEIDENTIFIER = NULL, @destinationContainerName NVARCHAR(4000), @destinationBlobName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CopyBlob_Embedded;GOCREATE FUNCTION [Azure.Embedded].GetBlobProperties(@LogicalConnectionName NVARCHAR(255),@container NVARCHAR(4000), @blob NVARCHAR(4000), @snapshotDateTime DATETIME = NULL, @xmsclientrequestId NVARCHAR(4000) = NULL)RETURNS TABLE( ETag NVARCHAR(255), BlobSequenceNumber BIGINT, BlobType NVARCHAR(255), ContentEncoding NVARCHAR(255), ContentLanguage NVARCHAR(255), ContentLength BIGINT, ContentMD5 NVARCHAR(255), ContentType NVARCHAR(255), CopyId UNIQUEIDENTIFIER, CopySource NVARCHAR(255), CopyStatus NVARCHAR(255), CopyStatusDescription NVARCHAR(4000), CopyCurrentPosition BIGINT, CopyTotalLength BIGINT, CopyCompletionTime DATETIME, LastModified DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus NVARCHAR(255), CacheControl NVARCHAR(255), [Date] DATETIME, RequestId UNIQUEIDENTIFIER, [Version] NVARCHAR(255) ) AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].GetBlobProperties_Embedded;GOCREATE PROCEDURE [Azure.Embedded].UpdateBlobMetadata(@LogicalConnectionName NVARCHAR(255),@container NVARCHAR(4000), @blob NVARCHAR(4000), @attributeList XML,@leaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].UpdateBlobMetadata_Embedded;GOCREATE PROCEDURE [Azure.Embedded].CreateOrReplacePageBlob( @logicalConnectionName NVARCHAR(255),@containerName NVARCHAR(255), @blobName NVARCHAR(255),@maximumSize BIGINT, @leaseId UNIQUEIDENTIFIER = NULL,@contentType NVARCHAR(255) = 'application/octect-stream', @contentEncoding NVARCHAR(255) = NULL,@contentLanguage NVARCHAR(255) = NULL,@timeout INT = NULL,@xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplacePageBlob_Embedded;GOCREATE PROCEDURE [Azure.Embedded].PutPage(@logicalConnectionName NVARCHAR(255),@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @startPositionBytes INT, @bytesToUpload INT = NULL,@leaseId UNIQUEIDENTIFIER = NULL,@contentMD5 NVARCHAR(255) = NULL,@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage_Embedded;GOCREATE FUNCTION [Azure.Embedded].PutPageFunction(@logicalConnectionName NVARCHAR(255),@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000), @buffer VARBINARY(MAX), @startPositionBytes BIGINT, @bytesToUpload INT = NULL,@leaseId UNIQUEIDENTIFIER = NULL,@contentMD5 NVARCHAR(255) = NULL,@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)RETURNS NVARCHAR(MAX)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage_Function_Embedded;GO-- QueueCREATE PROCEDURE [Azure.Embedded].CreateQueue(@logicalConnectionName NVARCHAR(255), @queueName NVARCHAR(255), @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].CreateQueue_Embedded;GOCREATE FUNCTION [Azure.Embedded].ListQueues(@logicalConnectionName NVARCHAR(255),@prefix 
CREATE FUNCTION [Azure.Embedded].ListQueues(@logicalConnectionName NVARCHAR(255), @prefix NVARCHAR(255) = NULL, @includeMetadata BIT = 0, @timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].ListQueues_Faulty(@logicalConnectionName NVARCHAR(255), @prefix NVARCHAR(255) = NULL, @includeMetadata BIT = 0, @timeoutSeconds INT = 60, @xmsclientrequestId NVARCHAR(255) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues_Faulty_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].EnqueueMessage(@logicalConnectionName NVARCHAR(255), @queueName NVARCHAR(255), @xmlMessage XML, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].EnqueueMessage_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].DequeueMessage(@logicalConnectionName NVARCHAR(255), @queueName NVARCHAR(255), @visibilityTimeoutSeconds INT = 10, -- a lower time might recover from failure faster
@timeoutSeconds INT = 60, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].DequeueMessage_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].RetrieveApproximateMessageCount(@logicalConnectionName NVARCHAR(255), @queueName NVARCHAR(255), @timeoutSeconds INT = 60, @xmsclientrequestId NVARCHAR(255) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].RetrieveApproximateMessageCount_Embedded;
GO
-- Table
CREATE PROCEDURE [Azure.Embedded].CreateTable(@LogicalConnectionName NVARCHAR(255), @TableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].CreateTable_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].ListTables(@LogicalConnectionName NVARCHAR(255), @xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].ListTables_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].QueryTable(@LogicalConnectionName NVARCHAR(255), @tableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(PartitionKey NVARCHAR(4000), RowKey NVARCHAR(4000), [TimeStamp] DATETIME, Attributes XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].QueryTable_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].DeleteEntity(@LogicalConnectionName NVARCHAR(255), @TableName NVARCHAR(4000), @PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].DeleteEntity_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].InsertOrReplaceEntity(@LogicalConnectionName NVARCHAR(255), @TableName NVARCHAR(4000), @PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000), @AttributeList XML, @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].InsertOrReplaceEntity_Embedded;
GO
-------- General purpose --------
CREATE FUNCTION [Azure].ToXmlDate(@dt DATETIME)
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlDate;
GO
CREATE FUNCTION [Azure].ToXmlString(@txt NVARCHAR(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlString;
GO
CREATE FUNCTION [Azure].ToXmlInt64(@i BIGINT)
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlInt64;
GO
CREATE FUNCTION [Azure].ToXmlDouble(@d FLOAT)
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlDouble;
GO
CREATE FUNCTION [Azure].ToXmlBinary(@d VARBINARY(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlBinary;
GO
CREATE FUNCTION [Azure].ToXmlGuid(@d UNIQUEIDENTIFIER)
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlGuid;
GO
CREATE FUNCTION [Azure].ToXmlStatement(@stmt NVARCHAR(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlStatement;
GO
CREATE FUNCTION [Azure].ComputeMD5AsBase64(@byteArray VARBINARY(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ComputeMD5AsBase64;
GO
CREATE FUNCTION [Azure].GetFileSizeBytes(@fileName NVARCHAR(4000))
RETURNS BIGINT EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GetFileSizeBytes;
GO
CREATE FUNCTION [Azure].GetFileBlock(@fileName NVARCHAR(4000), @offsetBytes BIGINT, @lengthBytes INT, @fileShareOption NVARCHAR(255) = 'Read')
RETURNS VARBINARY(MAX) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GetFileBlock;
GO
-------- Blob Lease --------
-- Direct
CREATE PROCEDURE Azure.AcquireBlobFixedLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseDuration INT = 60, @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobFixedLease;
GO
CREATE PROCEDURE Azure.AcquireBlobInfiniteLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobInfiniteLease;
GO
CREATE PROCEDURE Azure.RenewBlobLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewBlobLease;
GO
CREATE PROCEDURE Azure.ChangeBlobLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @proposedLeaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeBlobLease;
GO
CREATE PROCEDURE Azure.ReleaseBlobLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseBlobLease;
GO
CREATE PROCEDURE Azure.BreakBlobLeaseWithGracePeriod(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseBreakPeriod INT = 15, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseWithGracePeriod;
GO
CREATE PROCEDURE Azure.BreakBlobLeaseImmediately(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseImmediately;
GO
-- Embedded
CREATE PROCEDURE [Azure.Embedded].AcquireBlobFixedLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseDuration INT = 60, @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobFixedLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].AcquireBlobInfiniteLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobInfiniteLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].RenewBlobLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewBlobLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ChangeBlobLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @proposedLeaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeBlobLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ReleaseBlobLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseBlobLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakBlobLeaseWithGracePeriod(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @leaseBreakPeriod INT = 15, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseWithGracePeriod_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakBlobLeaseImmediately(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000), @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseImmediately_Embedded;
GO
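The blob lease wrappers above follow the standard Windows Azure blob lease workflow (acquire, renew, change, release, break), which is the same mechanism the cluster implementation uses to detect blob ownership. As a minimal sketch of the calling pattern, assuming a placeholder storage account mystorageaccount, container container1, and blob test.mdf (none of these names come from this document; substitute your own values):

DECLARE @leaseId UNIQUEIDENTIFIER = NEWID();
-- Acquire an infinite lease on the blob, proposing our own lease ID so we know it afterwards.
EXEC Azure.AcquireBlobInfiniteLease
    @AccountName = 'mystorageaccount',   -- placeholder account
    @SharedKey = '<shared key>',         -- placeholder shared key
    @useHTTPS = 1,
    @ContainerName = 'container1',       -- placeholder container
    @blobName = 'test.mdf',              -- placeholder blob
    @proposedLeaseId = @leaseId;
-- While the lease is held, the blob reports LeaseStatus = 'Locked' to every other caller.
EXEC Azure.ReleaseBlobLease
    @AccountName = 'mystorageaccount',
    @SharedKey = '<shared key>',
    @useHTTPS = 1,
    @ContainerName = 'container1',
    @blobName = 'test.mdf',
    @leaseId = @leaseId;

Proposing the lease ID up front avoids having to parse it back out of the response, which keeps a T-SQL caller simple.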
-------- Container Lease --------
-- Direct
CREATE PROCEDURE Azure.AcquireContainerFixedLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @leaseDuration INT = 60, @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerFixedLease;
GO
CREATE PROCEDURE Azure.AcquireContainerInfiniteLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerInfiniteLease;
GO
CREATE PROCEDURE Azure.RenewContainerLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewContainerLease;
GO
CREATE PROCEDURE Azure.ChangeContainerLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @proposedLeaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeContainerLease;
GO
CREATE PROCEDURE Azure.ReleaseContainerLease(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseContainerLease;
GO
CREATE PROCEDURE Azure.BreakContainerLeaseWithGracePeriod(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @leaseBreakPeriod INT = 15, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseWithGracePeriod;
GO
CREATE PROCEDURE Azure.BreakContainerLeaseImmediately(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS BIT, @ContainerName NVARCHAR(4000), @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseImmediately;
GO
-- Embedded
CREATE PROCEDURE [Azure.Embedded].AcquireContainerFixedLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @leaseDuration INT = 60, @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerFixedLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].AcquireContainerInfiniteLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @proposedLeaseId UNIQUEIDENTIFIER = NULL, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerInfiniteLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].RenewContainerLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewContainerLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ChangeContainerLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @proposedLeaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeContainerLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ReleaseContainerLease(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseContainerLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakContainerLeaseWithGracePeriod(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @leaseBreakPeriod INT = 15, @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseWithGracePeriod_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakContainerLeaseImmediately(@LogicalConnectionName NVARCHAR(255), @ContainerName NVARCHAR(4000), @timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseImmediately_Embedded;
GO
CREATE FUNCTION [Azure].GetContainerFromUri(@resourceUri NVARCHAR(4000))
RETURNS NVARCHAR(255) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GetContainerFromUri;
GO
CREATE FUNCTION [Azure].GenerateBlobSharedAccessSignatureURI(@resourceUri NVARCHAR(4000), @sharedKey NVARCHAR(4000), @permissions NVARCHAR(8), @resourceType NVARCHAR(4), @validityStart DATETIME, @validityEnd DATETIME, @identifier NVARCHAR(4000))
RETURNS NVARCHAR(255) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GenerateBlobSharedAccessSignatureURI;
GO
CREATE FUNCTION [Azure].GenerateDirectBlobSharedAccessSignatureURI(@resourceUri NVARCHAR(4000), @sharedKey NVARCHAR(4000), @permissions NVARCHAR(8), @resourceType NVARCHAR(4), @validityStart DATETIME, @validityEnd DATETIME)
RETURNS NVARCHAR(255) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GenerateDirectBlobSharedAccessSignatureURI;
GO
CREATE FUNCTION [Azure].GeneratePolicyBlobSharedAccessSignatureURI(@resourceUri NVARCHAR(4000), @sharedKey NVARCHAR(4000), @resourceType NVARCHAR(4), @identifier NVARCHAR(4000))
RETURNS NVARCHAR(255) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GeneratePolicyBlobSharedAccessSignatureURI;
GO
--------------------- Management ---------------------
CREATE FUNCTION [Azure.Management].GetServices(@certificateThumbprint NVARCHAR(255), @subscriptionId UNIQUEIDENTIFIER)
RETURNS TABLE(ServiceName NVARCHAR(4000), Url NVARCHAR(4000), DefaultWinRmCertificateThumbprint NVARCHAR(255), AffinityGroup NVARCHAR(4000), DateCreated DATETIME, DateLastModified DATETIME, [Description] NVARCHAR(MAX), Label NVARCHAR(4000), Location NVARCHAR(4000), [Status] NVARCHAR(255))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].GetServices;
GO
CREATE FUNCTION [Azure.Management].GetDeploymentsPersistentVMRolesWithInputEndpoints(@certificateThumbprint NVARCHAR(255), @subscriptionId UNIQUEIDENTIFIER, @serviceName NVARCHAR(255), @deploymentSlots NVARCHAR(255))
RETURNS TABLE(Name NVARCHAR(4000), DeploymentSlot NVARCHAR(255), PrivateID NVARCHAR(255), [Status] NVARCHAR(255), Label NVARCHAR(4000), Url NVARCHAR(4000), Configuration NVARCHAR(MAX), UpgradeDomainCount INT, VMName NVARCHAR(4000), OsVersion NVARCHAR(4000), RoleSize NVARCHAR(255), DefaultWinRmCertificateThumbprint NVARCHAR(4000), EndpointName NVARCHAR(4000), LocalPort INT, Port INT, Protocol NVARCHAR(255), Vip NVARCHAR(255))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].GetDeploymentsPersistentVMRolesWithInputEndpoints;
GO
CREATE PROCEDURE [Azure.Management].AddInputEndpointToPersistentVM(@certificateThumbprint NVARCHAR(255), @subscriptionId UNIQUEIDENTIFIER, @serviceName NVARCHAR(255), @deploymentSlots NVARCHAR(255), @vmName NVARCHAR(255), @EndpointName NVARCHAR(255), @LocalPort INT, @EnableDirectServerReturn BIT, @Port INT, @Protocol NVARCHAR(255), @Vip NVARCHAR(255), @Blocking BIT = 0)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].AddInputEndpointToPersistentVM;
GO
CREATE PROCEDURE [Azure.Management].RemoveEndpointFromPersistentVM(@certificateThumbprint NVARCHAR(255), @subscriptionId UNIQUEIDENTIFIER, @serviceName NVARCHAR(255), @deploymentSlots NVARCHAR(255), @vmName NVARCHAR(255), @EndpointName NVARCHAR(255), @Blocking BIT = 0)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].RemoveEndpointFromPersistentVM;
GO
CREATE FUNCTION [Azure.Management].GetOperationStatus(@certificateThumbprint NVARCHAR(255), @subscriptionId UNIQUEIDENTIFIER, @operationId UNIQUEIDENTIFIER)
RETURNS NVARCHAR(255) EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].GetOperationStatus;
GO
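Note the two calling conventions side by side: objects in the [Azure] schema (Direct) receive the storage account name and shared key on every call, while objects in [Azure.Embedded] resolve a logical connection name registered with the helper library. A minimal sketch of the difference, with placeholder account, key, logical connection name, and container (none of these values come from this document):

-- Direct call: credentials are passed explicitly on every invocation.
SELECT Name, Url, LeaseStatus
FROM [Azure].ListBlobs('mystorageaccount', '<shared key>', 1, 'container1', 0, 0, 0, 0, NULL);

-- Embedded call: the same listing through a previously registered logical connection.
SELECT Name, Url, LeaseStatus
FROM [Azure.Embedded].ListBlobs('MyLogicalConnection', 'container1', 0, 0, 0, 0, NULL);

The Embedded form keeps shared keys out of application code, at the cost of registering the logical connection once up front.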
Installing SQL Management Objects

The proposed cluster implementation is based on the Microsoft SQL Server to Windows Azure helper library. Install this tool according to the directions in section D, and then execute all the scripts listed in the remainder of this section.

Schema

All the cluster objects are in the HAOnAzure schema.

USE [HAOnAzure]
GO
CREATE SCHEMA [HAOnAzure];
GO

Metadata Tables

USE [HaOnAzure]
GO
/****** Object: Table [HAOnAzure].[Credential] Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [HAOnAzure].[Credential](
    [AccountName] [nvarchar](255) NOT NULL,
    [SharedKey] [nvarchar](4000) NOT NULL,
    [IsMaster] BIT NOT NULL DEFAULT(0),
    PRIMARY KEY CLUSTERED ([AccountName] ASC) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

The other tables are Windows Azure tables and are created on demand in the master account.
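Rather than inserting rows into this table by hand, the credential rows are expected to be maintained through [HAOnAzure].[UpsertAzureAccountCredentials], defined later in this section. A minimal sketch, where the account name and shared key are placeholders for your own values:

EXEC [HAOnAzure].[UpsertAzureAccountCredentials]
    @account = 'mymasteraccount',   -- placeholder storage account name
    @secret = '<shared key>',       -- placeholder shared key
    @isMaster = 1;                  -- exactly one credential must be marked as master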
Stored Procedures

-------------------- 04-SP-HAOnAzure.PlaceDBOnHA.sql --------------------

USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[PlaceDBOnHA] Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[PlaceDBOnHA] @database_id INT
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
--------------- Check for Azure table existence ------------------
IF NOT EXISTS(SELECT * FROM [Azure].ListTables(@AccountName, @SharedKey, 1, NULL) WHERE Name = 'HaOnAzureDatabase')
BEGIN
    PRINT 'Windows Azure table not found... creating'
    EXEC [Azure].CreateTable @AccountName, @SharedKey, 1, 'HaOnAzureDatabase';
    PRINT 'Windows Azure table HaOnAzureDatabase created in account ' + @AccountName
END
-------------- End Check for Azure table existence ----------------
DECLARE @fn SYSNAME;
IF(EXISTS(
    SELECT Attributes.value('(//AccountName)[1]', 'nvarchar(260)') AS AccountName, Attributes.value('(//DatabaseName)[1]', 'nvarchar(260)') AS DatabaseName, Attributes.value('(//name)[1]', 'sysname') AS [name], Attributes.value('(//type)[1]', 'tinyint') AS [type], Attributes.value('(//physical_name)[1]', 'nvarchar(260)') AS physical_name
    FROM [Azure].QueryTable(@AccountName, @SharedKey, 1, 'HaOnAzureDatabase', NULL)
    WHERE PartitionKey = DB_NAME(@database_id)))
BEGIN
    RAISERROR('Database already added', 18, 1);
END
ELSE BEGIN
    SELECT TOP 1 @fn = physical_name FROM sys.master_files WHERE database_id = @database_id
    IF @fn IS NULL
    BEGIN
        RAISERROR('Database not found', 18, 1);
    END
    ELSE BEGIN
        IF LOWER(LEFT(@fn, 4)) <> 'http'
        BEGIN
            RAISERROR('Database must be SQL Server Data Files in Windows Azure', 18, 1);
        END
        ELSE BEGIN
            DECLARE @file_id INT;
            DECLARE @name SYSNAME;
            DECLARE @databaseName SYSNAME;
            DECLARE @stmt NVARCHAR(4000);
            DECLARE @xml XML;
            DECLARE curs CURSOR FAST_FORWARD FOR
                SELECT database_id, [file_id], [name], DB_NAME(database_id) FROM sys.master_files WHERE database_id = @database_id
            OPEN curs;
            FETCH NEXT FROM curs INTO @database_id, @file_id, @name, @databaseName
            WHILE @@FETCH_STATUS = 0
            BEGIN
                SET @stmt = N'SELECT SUBSTRING(physical_name, 9, CHARINDEX(''.'', physical_name, 9)-9) AS ''AccountName'', DB_NAME(' + CONVERT(NVARCHAR, @database_id) + N') AS ''DatabaseName'', name, [type], physical_name FROM sys.master_files WHERE database_id = ' + CONVERT(NVARCHAR, @database_id) + N' AND file_id = ' + CONVERT(NVARCHAR, @file_id) + N';';
                --PRINT @stmt;
                SELECT @xml = [Azure].[ToXmlStatement](@stmt);
                --SELECT @xml;
                SELECT @xml = t.c.query('.') FROM @xml.nodes('/ResultSet/Record') AS t(c);
                PRINT N'Adding datafile ' + @name + N' of database ' + @databaseName + N'.'
                EXEC [Azure].[InsertOrReplaceEntity] @AccountName, @SharedKey, 1, 'HaOnAzureDatabase', @databaseName, @name, @xml;
                FETCH NEXT FROM curs INTO @database_id, @file_id, @name, @databaseName
            END
            CLOSE curs;
            DEALLOCATE curs;
        END
    END
END
GO

-------------------- 05-SP-HAOnAzure.ScanForDBAndMountEndpoints.sql --------------------

USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[ScanForDBAndMountEndpoints] Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[ScanForDBAndMountEndpoints]
AS
SET NOCOUNT ON;
DECLARE @databaseName SYSNAME;
DECLARE @serverName SYSNAME;
SET @serverName = @@SERVERNAME;
IF CHARINDEX('\', @serverName) > 0
BEGIN
    SET @serverName = SUBSTRING(@serverName, 0, CHARINDEX('\', @serverName));
END
SET @serverName = LOWER(@serverName);
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
-- Load the Azure endpoint table into a table variable
DECLARE @endpoint TABLE(
    [CertificateThumbprint] NVARCHAR(4000) NOT NULL,
    [SubscriptionId] UNIQUEIDENTIFIER NOT NULL,
    [ServiceName] NVARCHAR(4000) NOT NULL,
    [DeploymentSlot] NVARCHAR(4000) NOT NULL,
    [DatabaseName] SYSNAME NOT NULL,
    [EndPointName] NVARCHAR(255) NOT NULL,
    [Port] INT NOT NULL,
    [LocalPort] INT NOT NULL,
    [Protocol] NVARCHAR(255) NOT NULL,
    [VIP] NVARCHAR(255) NOT NULL)
INSERT INTO @endpoint
SELECT
    Attributes.value('(//CertificateThumbprint)[1]', 'nvarchar(4000)') AS [CertificateThumbprint],
    Attributes.value('(//SubscriptionId)[1]', 'UNIQUEIDENTIFIER') AS [SubscriptionId],
    Attributes.value('(//ServiceName)[1]', 'nvarchar(4000)') AS [ServiceName],
    Attributes.value('(//DeploymentSlot)[1]', 'nvarchar(4000)') AS [DeploymentSlot],
    PartitionKey AS [DatabaseName],
    RowKey AS [EndPointName],
    Attributes.value('(//Port)[1]', 'INT') AS [Port],
    Attributes.value('(//LocalPort)[1]', 'INT') AS [LocalPort],
    Attributes.value('(//Protocol)[1]', 'nvarchar(255)') AS [Protocol],
    Attributes.value('(//VIP)[1]', 'nvarchar(255)') AS [VIP]
FROM [Azure].QueryTable(@AccountName, @SharedKey, 1, 'HaOnAzureEndpoint', NULL);
DECLARE curs CURSOR FAST_FORWARD FOR
    SELECT [DatabaseName] FROM @endpoint GROUP BY [DatabaseName];
OPEN curs;
FETCH NEXT FROM curs INTO @databaseName
WHILE @@FETCH_STATUS = 0
BEGIN
    IF(EXISTS(SELECT * FROM sys.databases WHERE name = @databaseName))
    BEGIN
        PRINT 'Database ' + CONVERT(VARCHAR(255), @databaseName) + ' is owned by us. Checking the listener...';
        DECLARE @CertificateThumbprint NVARCHAR(4000);
        DECLARE @SubscriptionId UNIQUEIDENTIFIER;
        DECLARE @ServiceName NVARCHAR(4000);
        DECLARE @DeploymentSlot NVARCHAR(4000);
        DECLARE @EndPointName NVARCHAR(255);
        DECLARE @Port INT;
        DECLARE @LocalPort INT;
        DECLARE @Protocol NVARCHAR(255);
        DECLARE @VIP NVARCHAR(255);
        DECLARE @actualVM NVARCHAR(255) = NULL;
        SELECT TOP 1 @CertificateThumbprint = CertificateThumbprint, @SubscriptionId = SubscriptionId, @ServiceName = ServiceName, @DeploymentSlot = DeploymentSlot, @EndPointName = EndPointName, @Port = Port, @LocalPort = LocalPort, @Protocol = Protocol, @VIP = VIP
        FROM @endpoint
        WHERE [DatabaseName] = @databaseName
        SELECT @actualVM = VMName
        FROM [Azure.Management].GetDeploymentsPersistentVMRolesWithInputEndpoints(@CertificateThumbprint, @SubscriptionId, @ServiceName, @DeploymentSlot)
        WHERE EndpointName = @EndPointName;
        PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' is owned by ' + CONVERT(VARCHAR(255), ISNULL(@actualVM, '<nobody>')) + '.'
        IF ISNULL(@actualVM, '') <> @serverName
        BEGIN
            PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' is not owned by the local machine (' + CONVERT(VARCHAR(255), @serverName) + ').'
            IF @actualVM IS NOT NULL
            BEGIN
                PRINT 'Bringing down endpoint...';
                EXEC [Azure.Management].RemoveEndpointFromPersistentVM @CertificateThumbprint, @SubscriptionId, @ServiceName, @DeploymentSlot, @actualVM, @EndPointName, 1;
                PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' unassigned.';
            END
            ELSE BEGIN
                PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' unassigned (not owned by anybody).';
            END
            SET @actualVM = @serverName;
            PRINT 'Mounting ' + CONVERT(VARCHAR(255), @EndPointName) + ' as local (' + @actualVM + ') endpoint (Port=' + CONVERT(VARCHAR(255), @Port) + ', LocalPort=' + CONVERT(VARCHAR(255), @LocalPort) + ')...';
            EXEC [Azure.Management].AddInputEndpointToPersistentVM @CertificateThumbprint, @SubscriptionId, @ServiceName, @DeploymentSlot, @actualVM, @EndPointName, @LocalPort, 0, @Port, @Protocol, @VIP, 1;
            PRINT 'Mounting done.';
        END
        ELSE BEGIN
            PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' is owned by the local machine (' + CONVERT(VARCHAR(255), @serverName) + '). Nothing to do.';
        END
    END
    ELSE BEGIN
        PRINT 'Database ' + CONVERT(VARCHAR(255), @databaseName) + ' is owned by someone else. Ignore.';
    END
    FETCH NEXT FROM curs INTO @databaseName
END
CLOSE curs
DEALLOCATE curs
GO

-------------------- 06-SP-HAOnAzure.ScanForDetachedDBsAndAttach.sql --------------------

USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[ScanForDetachedDBsAndAttach] Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[ScanForDetachedDBsAndAttach]
AS
SET NOCOUNT ON;
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
-- Load [Azure].QueryTable in a table variable
DECLARE @database TABLE(
    [AccountName] NVARCHAR(260) NOT NULL,
    [DatabaseName] SYSNAME NOT NULL,
    [File] SYSNAME NOT NULL,
    [type] TINYINT NOT NULL,
    [FileName] NVARCHAR(260) NOT NULL);
INSERT INTO @database
SELECT
    Attributes.value('(//AccountName)[1]', 'nvarchar(260)') AS AccountName,
    Attributes.value('(//DatabaseName)[1]', 'SYSNAME') AS DatabaseName,
    Attributes.value('(//name)[1]', 'SYSNAME') AS [name],
    Attributes.value('(//type)[1]', 'TINYINT') AS [type],
    Attributes.value('(//physical_name)[1]', 'nvarchar(260)') AS physical_name
FROM [Azure].QueryTable(@AccountName, @SharedKey, 1, 'HaOnAzureDatabase', NULL);
DECLARE @container NVARCHAR(255);
DECLARE @tbl TABLE(Url NVARCHAR(4000), LeaseStatus NVARCHAR(255));
PRINT 'Retrieving blob status (by container)...'
DECLARE curs CURSOR FAST_FORWARD FOR
    SELECT CRED.AccountName, CRED.SharedKey, [Azure].GetContainerFromUri([FileName]) AS 'Container'
    FROM @database DB INNER JOIN [HAOnAzure].[Credential] CRED ON CRED.AccountName = DB.AccountName
    GROUP BY CRED.AccountName, CRED.SharedKey, [Azure].GetContainerFromUri([FileName])
OPEN curs;
FETCH NEXT FROM curs INTO @AccountName, @SharedKey, @container
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Now we should check for the lease!
    DECLARE @cnt INT
    INSERT INTO @tbl
    SELECT Url, LeaseStatus FROM [Azure].ListBlobs(@AccountName, @SharedKey, 1, @container, 0, 0, 0, 0, NULL)
    FETCH NEXT FROM curs INTO @AccountName, @SharedKey, @container
END
CLOSE curs
DEALLOCATE curs
PRINT 'Retrieving blob status (by container) completed.'
PRINT 'Looping in unlocked blobs...'
DECLARE @dbName SYSNAME;
DECLARE @cntUnlocked INT
DECLARE @totalFiles INT
DECLARE curs CURSOR FAST_FORWARD FOR
    SELECT [DatabaseName], COUNT(*) AS 'Unlocked count'
    FROM @tbl BLOBS INNER JOIN @database DB ON DB.[FileName] = BLOBS.Url
    WHERE [LeaseStatus] = 'Unlocked'
    GROUP BY [DatabaseName];
OPEN curs;
FETCH NEXT FROM curs INTO @dbName, @cntUnlocked
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @totalFiles = COUNT(*) FROM @database WHERE [DatabaseName] = @dbName GROUP BY [DatabaseName]
    PRINT 'Database ' + CONVERT(VARCHAR(255), @dbName) + ' has (' + CONVERT(VARCHAR(255), @cntUnlocked) + '/' + CONVERT(VARCHAR(255), @totalFiles) + ') unlocked files.'
    IF @cntUnlocked = @totalFiles
    BEGIN
        -------------------- Attach --------------------
        PRINT 'Trying to attach database ' + @DBName + '...';
        DECLARE @txt NVARCHAR(MAX);
        SET @txt = N'CREATE DATABASE ' + QUOTENAME(@DBName) + ' ON ';
        DECLARE @fileName NVARCHAR(4000);
        DECLARE @first BIT = 0;
        -- Data files
        DECLARE cDataFiles CURSOR FAST_FORWARD FOR
            SELECT [FileName] FROM @database WHERE [type] = 0 AND DatabaseName = @DBName;
        OPEN cDataFiles;
        FETCH NEXT FROM cDataFiles INTO @fileName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            IF @first = 0
            BEGIN
                SET @first = 1;
            END
            ELSE BEGIN
                SET @txt += ', ';
            END
            SET @txt += '(FILENAME = ''' + @fileName + ''')'
            FETCH NEXT FROM cDataFiles INTO @fileName
        END
        CLOSE cDataFiles
        DEALLOCATE cDataFiles
        -- Transaction log
        SET @txt += ' LOG ON ';
        SET @first = 0;
        DECLARE cTlog CURSOR FAST_FORWARD FOR
            SELECT [FileName] FROM @database WHERE [type] = 1 AND DatabaseName = @DBName;
        OPEN cTlog;
        FETCH NEXT FROM cTlog INTO @fileName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            IF @first = 0
            BEGIN
                SET @first = 1;
            END
            ELSE BEGIN
                SET @txt += ', ';
            END
            SET @txt += '(FILENAME = ''' + @fileName + ''')'
            FETCH NEXT FROM cTlog INTO @fileName
        END
        CLOSE cTlog
        DEALLOCATE cTlog
        SET @txt += ' FOR ATTACH;';
        PRINT 'Will call this statement "' + @txt + '"';
        EXEC sp_executeSQL @txt;
        IF(EXISTS(SELECT * FROM sys.databases WHERE [name] = @DBName))
            PRINT 'Attach successful!';
        ELSE
            PRINT 'Attach failed';
    END
    FETCH NEXT FROM curs INTO @dbName, @cntUnlocked
END
CLOSE curs
DEALLOCATE curs
PRINT 'Looping in unlocked blobs completed.';
GO
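The two scan procedures implement the polling architecture described earlier and are meant to run periodically on every node. As a minimal sketch of how they might be scheduled with SQL Server Agent, assuming a one-minute cadence; the job name, step order, and schedule are illustrative choices, not part of the framework:

USE msdb;
GO
-- Create the polling job (illustrative name).
EXEC dbo.sp_add_job @job_name = N'HAOnAzure polling';
-- Single T-SQL step: first try to attach orphaned databases, then realign the endpoints.
EXEC dbo.sp_add_jobstep @job_name = N'HAOnAzure polling', @step_name = N'Poll',
    @subsystem = N'TSQL', @database_name = N'HaOnAzure',
    @command = N'EXEC [HAOnAzure].[ScanForDetachedDBsAndAttach]; EXEC [HAOnAzure].[ScanForDBAndMountEndpoints];';
-- Run every minute, all day (freq_subday_type = 4 means minutes).
EXEC dbo.sp_add_jobschedule @job_name = N'HAOnAzure polling', @name = N'Every minute',
    @freq_type = 4, @freq_interval = 1, @freq_subday_type = 4, @freq_subday_interval = 1;
-- Target the local server.
EXEC dbo.sp_add_jobserver @job_name = N'HAOnAzure polling';
GO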
-------------------- 07-SP.HAOnAzure.UpsertAzureAccountCredentials.sql --------------------

USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[UpsertAzureAccountCredentials] Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[UpsertAzureAccountCredentials] @account NVARCHAR(255), @secret NVARCHAR(4000), @isMaster BIT
AS
MERGE [HAOnAzure].[Credential] AS target
USING (VALUES(@account, @secret, @isMaster)) AS src(AccountName, SharedKey, IsMaster)
ON target.AccountName = src.AccountName
WHEN MATCHED THEN
    UPDATE SET SharedKey = src.SharedKey, IsMaster = src.IsMaster
WHEN NOT MATCHED THEN
    INSERT (AccountName, SharedKey, IsMaster) VALUES(src.AccountName, src.SharedKey, src.IsMaster);
GO

-------------------- 08-SP.HaOnAzure.InsertOrUpdateEndpointOnHA.sql --------------------

CREATE PROCEDURE [HaOnAzure].InsertOrUpdateEndpointOnHA @CertificateThumbprint NVARCHAR(255), @SubscriptionId UNIQUEIDENTIFIER, @ServiceName NVARCHAR(255), @DatabaseName SYSNAME, @EndPointName NVARCHAR(255), @Port INT, @VIP NVARCHAR(64), @LocalPort INT = 1433, @DeploymentSlot NVARCHAR(255) = 'Production', @Protocol NVARCHAR(64) = 'tcp'
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
--------------- Check for Azure table existence ------------------
IF NOT EXISTS(SELECT * FROM [Azure].ListTables(@AccountName, @SharedKey, 1, NULL) WHERE Name = 'HaOnAzureEndpoint')
BEGIN
    PRINT 'Windows Azure table not found... creating'
    EXEC [Azure].CreateTable @AccountName, @SharedKey, 1, 'HaOnAzureEndpoint';
    PRINT 'Windows Azure table HaOnAzureEndpoint created in account ' + @AccountName
END
-------------- End Check for Azure table existence ----------------
DECLARE @xml XML;
SET @xml = N'<row>' +
    N'<CertificateThumbprint>' + [Azure].[ToXmlString](@CertificateThumbprint) + N'</CertificateThumbprint>' +
    N'<SubscriptionId>' + [Azure].[ToXmlString](@SubscriptionId) + N'</SubscriptionId>' +
    N'<ServiceName>' + [Azure].[ToXmlString](@ServiceName) + N'</ServiceName>' +
    N'<Port>' + [Azure].[ToXmlInt64](@Port) + N'</Port>' +
    N'<VIP>' + [Azure].[ToXmlString](@VIP) + N'</VIP>' +
    N'<LocalPort>' + [Azure].[ToXmlInt64](@LocalPort) + N'</LocalPort>' +
    N'<DeploymentSlot>' + [Azure].[ToXmlString](@DeploymentSlot) + N'</DeploymentSlot>' +
    N'<Protocol>' + [Azure].[ToXmlString](@Protocol) + N'</Protocol>' +
    N'</row>';
--PRINT @stmt;
SELECT @xml;
EXEC [Azure].[InsertOrReplaceEntity] @AccountName, @SharedKey, 1, 'HaOnAzureEndpoint', @DatabaseName, @EndPointName, @xml;
GO

-------------------- 09-SP.HaOnAzure.RemoveDBFromHA.sql --------------------

CREATE PROCEDURE [HaOnAzure].[RemoveDBFromHA] @dbName SYSNAME
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
DECLARE @PartitionKey NVARCHAR(260);
DECLARE @RowKey NVARCHAR(260);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
DECLARE curs CURSOR FAST_FORWARD FOR
    SELECT PartitionKey, RowKey
    FROM [Azure].QueryTable(@AccountName, @SharedKey, 1, 'HaOnAzureDatabase', NULL)
    WHERE PartitionKey = @dbName;
OPEN curs;
FETCH NEXT FROM curs INTO @PartitionKey, @RowKey
WHILE @@FETCH_STATUS = 0
BEGIN
    --PRINT @PartitionKey;
    EXEC [Azure].[DeleteEntity] @AccountName, @SharedKey, 1, 'HaOnAzureDatabase', @PartitionKey, @RowKey;
    FETCH NEXT FROM curs INTO @PartitionKey, @RowKey
END
CLOSE curs;
DEALLOCATE curs;
GO

-------------------- 10-SP.HaOnAzure.RemoveEndpointFromHA.sql --------------------

CREATE PROCEDURE [HaOnAzure].[RemoveEndpointFromHA] @dbName SYSNAME, @endpointName NVARCHAR(260)
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
DECLARE @PartitionKey NVARCHAR(260);
DECLARE @RowKey NVARCHAR(260);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
EXEC [Azure].[DeleteEntity] @AccountName, @SharedKey, 1, 'HaOnAzureEndpoint', @dbName, @endpointName;
GO

-------------------- 11-SP.HaOnAzure.ShowEndpoints.sql --------------------

CREATE PROCEDURE [HaOnAzure].[ShowEndpoints]
AS
SET NOCOUNT ON;
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
SELECT
    Attributes.value('(//CertificateThumbprint)[1]', 'nvarchar(4000)') AS [CertificateThumbprint],
    Attributes.value('(//SubscriptionId)[1]', 'UNIQUEIDENTIFIER') AS [SubscriptionId],
    Attributes.value('(//ServiceName)[1]', 'nvarchar(4000)') AS [ServiceName],
    Attributes.value('(//DeploymentSlot)[1]', 'nvarchar(4000)') AS [DeploymentSlot],
    PartitionKey AS [DatabaseName],
    RowKey AS [EndPointName],
    Attributes.value('(//Port)[1]', 'INT') AS [Port],
    Attributes.value('(//LocalPort)[1]', 'INT') AS [LocalPort],
    Attributes.value('(//Protocol)[1]', 'nvarchar(255)') AS [Protocol],
    Attributes.value('(//VIP)[1]', 'nvarchar(255)') AS [VIP]
FROM [Azure].QueryTable(@AccountName, @SharedKey, 1, 'HaOnAzureEndpoint', NULL);
GO

-------------------- 12-SP.HAOnAzure.ShowDatabasesOnHA.sql --------------------

USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[ShowDatabasesOnHA] Script Date: 12/27/2013 2:44:19 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[ShowDatabasesOnHA]
AS
SET NOCOUNT ON;
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey FROM [HAOnAzure].[Credential] WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
    THROW 53000, 'Default credential not found. Add it and try again.', 1;
SELECT
    Attributes.value('(//AccountName)[1]', 'nvarchar(260)') AS AccountName,
    Attributes.value('(//DatabaseName)[1]', 'nvarchar(260)') AS DatabaseName,
    Attributes.value('(//name)[1]', 'sysname') AS [name],
    Attributes.value('(//type)[1]', 'tinyint') AS [type],
    Attributes.value('(//physical_name)[1]', 'nvarchar(260)') AS physical_name
FROM [Azure].QueryTable(@AccountName, @SharedKey, 1, 'HaOnAzureDatabase', NULL);
GO

Stored procedure reference

[HAOnAzure].[PlaceDBOnHA]

Description: This stored procedure adds the mandatory metadata to the Windows Azure database table. After a database is in that table, all cluster instances monitor its lease status. If it is unlocked, the instances try to attach it as a local database.

Parameters:
- @database_id (INT): The database ID of the database to add. The database ID can be found in the sys.databases catalog view. Only SQL Server Data Files in Windows Azure databases can be added.

Returns: nothing.
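For example, to add a hypothetical SQL Server Data Files in Windows Azure database named MyHADatabase (a placeholder name) to the cluster metadata:

DECLARE @database_id INT = DB_ID(N'MyHADatabase');  -- placeholder database name
EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = @database_id;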
[HAOnAzure].[ScanForDBAndMountEndpoints]

Description: This stored procedure scans all the SQL Server Data Files in Windows Azure databases that are attached to the local instance and present in the Windows Azure database table. For each database found, the stored procedure checks whether the corresponding endpoint is mapped to the local instance. If not, the procedure maps the endpoint to the local instance, bringing down the preexisting mapping if needed.

Parameters: none.

Returns: nothing.

Remarks: A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].

[HAOnAzure].[ScanForDetachedDBsAndAttach]

Description: This stored procedure scans all the databases in the Windows Azure database table for unlocked blobs. If a database is found with all its blobs unlocked, the procedure attempts to attach it as a local database.

Parameters: none.

Returns: nothing.

Remarks: A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].

[HAOnAzure].[UpsertAzureAccountCredentials]

Description: This stored procedure adds or updates the Windows Azure storage credentials. You can add as many credentials as you want to support multiple accounts, but exactly one must be marked as master. The framework uses the master account for the shared Windows Azure metadata tables.

Parameters:
- @account (NVARCHAR(255)): The Windows Azure account name.
- @secret (NVARCHAR(4000)): One of the shared keys of the Windows Azure credential.
- @isMaster (BIT): Set this to true for the master account. Only one master account is supported at a time.

Returns: nothing.

[HaOnAzure].[InsertOrUpdateEndpointOnHA]

Description: This stored procedure adds or updates an endpoint mapping.

Parameters:
- @CertificateThumbprint (NVARCHAR(255)): The certificate thumbprint that identifies the Windows Azure management certificate. For more information, see “Windows Azure management certificate” earlier in this document.
- @SubscriptionId (UNIQUEIDENTIFIER): The Windows Azure subscription ID. For more information, see the remarks section.
- @ServiceName (NVARCHAR(255)): The Windows Azure cloud service name. This is also the DNS name of your cluster. For more information, see “Virtual machines in the same Windows Azure cloud service” earlier in this document.
- @DatabaseName (SYSNAME): The database name. One database can have one endpoint.
- @EndPointName (NVARCHAR(255)): The endpoint name. Any unique name can be used; however, standard Windows Azure endpoint naming constraints apply.
- @Port (INT): The remote port (the port to connect to from the client perspective). Each port must be unique and available (not used by other services). For more information, see “Endpoint table” earlier in this document.
- @VIP (NVARCHAR(64)): The VIP of the cloud service.
- @LocalPort (INT): The TCP port of SQL Server. This can be shared among different external ports. For more information, see “Endpoint table” earlier in this document.
- @DeploymentSlot (NVARCHAR(255)): The Windows Azure deployment slot. For more information about Windows Azure deployments, see “Manage Deployments in Windows Azure” ().
- @Protocol (NVARCHAR(64)): For SQL Server TDS, specify “tcp”.

Returns: nothing.

Remarks: You can retrieve the subscription ID through the Windows Azure Management Portal by clicking Settings.

Figure 41 - How to find the Windows Azure Subscription ID using the management portal

A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
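As an illustration, the following sketch maps a hypothetical database to an endpoint; every value shown (thumbprint, subscription ID, service name, database, endpoint name, port, and VIP) is a placeholder to be replaced with values from your own subscription:

EXEC [HaOnAzure].[InsertOrUpdateEndpointOnHA]
    @CertificateThumbprint = N'0123456789ABCDEF0123456789ABCDEF01234567',  -- placeholder thumbprint
    @SubscriptionId = '00000000-0000-0000-0000-000000000000',              -- placeholder subscription ID
    @ServiceName = N'mycloudservice',                                       -- placeholder cloud service
    @DatabaseName = N'MyHADatabase',                                        -- placeholder database
    @EndPointName = N'MyHAEndpoint',                                        -- placeholder endpoint name
    @Port = 14330,                                                          -- placeholder public port
    @VIP = N'203.0.113.10';                                                 -- placeholder cloud service VIP
-- @LocalPort, @DeploymentSlot, and @Protocol keep their defaults (1433, 'Production', 'tcp').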
[HaOnAzure].[RemoveDBFromHA]

Description: This stored procedure removes a database from the scan that is performed each time the stored procedure [HAOnAzure].[ScanForDetachedDBsAndAttach] is called. This action effectively removes the database from the cluster.

Parameters:
- @dbName (SYSNAME): The database name.

Returns: nothing.

Remarks: A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].

[HaOnAzure].[RemoveEndpointFromHA]

Description: This stored procedure removes the endpoint from the scan that is performed each time the stored procedure [HAOnAzure].[ScanForDBAndMountEndpoints] is called. This operation does not remove the endpoint from the Windows Azure deployment. If you need to remove the endpoint, you must use the Windows Azure PowerShell cmdlets or the stored procedure [Azure.Management].[RemoveEndpointFromPersistentVM]. For more information about using the cmdlets, see “Windows Azure PowerShell” ().

Parameters:
- @dbName (SYSNAME): The database name.
- @endpointName (NVARCHAR(260)): The endpoint name.

Returns: nothing.

Remarks: A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
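Continuing the placeholder example above, a database would be decommissioned by removing its endpoint mapping and then its metadata:

EXEC [HaOnAzure].[RemoveEndpointFromHA] @dbName = N'MyHADatabase', @endpointName = N'MyHAEndpoint';
EXEC [HaOnAzure].[RemoveDBFromHA] @dbName = N'MyHADatabase';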
[HaOnAzure].[ShowEndpoints]

Description: This stored procedure lists the endpoints configured using the stored procedure [HaOnAzure].[InsertOrUpdateEndpointOnHA]. This is the configured state, not the actual deployment. If you need to inspect the actual endpoint configuration, use the Windows Azure PowerShell cmdlets or the stored procedure [Azure.Management].[GetDeploymentsPersistentVMRolesWithInputEndpoints]. For more information about the cmdlets, see “Windows Azure PowerShell” ().

Parameters: none.

Returns:
- CertificateThumbprint (NVARCHAR(255)): The certificate thumbprint that identifies the Windows Azure management certificate. For more information, see “Windows Azure management certificate” earlier in this document.
- SubscriptionId (UNIQUEIDENTIFIER): The Windows Azure subscription ID. For more information, see the remarks section of the description of the stored procedure [HaOnAzure].[InsertOrUpdateEndpointOnHA].
- ServiceName (NVARCHAR(255)): The Windows Azure cloud service name. This is also the DNS name of your cluster. For more information, see “Virtual machines in the same Windows Azure cloud service” earlier in this document.
- DeploymentSlot (NVARCHAR(255)): The Windows Azure deployment slot. For more information about Windows Azure deployments, see “Manage Deployments in Windows Azure” ().
- DatabaseName (SYSNAME): The database name. One database can have one endpoint.
- EndPointName (NVARCHAR(255)): The endpoint name. Any unique name can be used; however, standard Windows Azure endpoint naming constraints apply.
- Port (INT): The remote port (that is, the port to connect to from the client perspective). Each port must be unique and available (not used by other services). For more information, see “Endpoint table” earlier in this document.
- LocalPort (INT): The TCP port of SQL Server. This can be shared among different external ports. For more information, see “Endpoint table” earlier in this document.
- Protocol (NVARCHAR(64)): For SQL Server TDS, specify “tcp”.
- VIP (NVARCHAR(64)): The VIP of the cloud service.

Remarks: A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].

[HAOnAzure].[ShowDatabasesOnHA]

Description: This stored procedure shows the databases added to the cluster scan that is performed each time the stored procedure [HAOnAzure].[ScanForDetachedDBsAndAttach] is called. It also returns a list of the physical files (blobs) that make up each database.

Parameters: none.

Returns:
- AccountName (NVARCHAR(260)): The Windows Azure account name.
- DatabaseName (NVARCHAR(260)): The database name.
- name (SYSNAME): The name of the file as it appears in the sys.master_files name field.
- type (TINYINT): The type of the file as it appears in the sys.master_files type field.
- physical_name (NVARCHAR(260)): The physical URI of the file as it appears in the sys.master_files physical_name field.

Remarks: A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
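Both procedures take no parameters, so checking the configured state of the cluster reduces to two calls:

EXEC [HaOnAzure].[ShowEndpoints];       -- configured endpoint mappings
EXEC [HAOnAzure].[ShowDatabasesOnHA];   -- databases and their blob files under cluster scan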