


Modern Data Warehouse – Azure HDInsight PPE Training

Contents
Azure account required for lab
Tracking Hurricane Sandy
Using Hive
Loading Data
Introduction to Pig
Moving Data: Use Sqoop to copy to SQL Azure
Power BI: Using Big Data
Elastic Processing with PowerShell
Roll back Azure changes
Terms of use

Azure account required for lab
Estimated time to complete the lab is 40-45 minutes.
While carrying out the exercises in this hands-on lab you will be using either the Azure portal or the Preview portal. To perform this lab, you will need a Microsoft Azure account. If you do not have an Azure account, you can request a free one-month trial. Within the trial, you can perform the other SQL Server 2014 hands-on labs along with other labs available on Azure. Note: to sign up for a free trial, you will need a mobile device that can receive text messages and a valid credit card.
Be sure to follow the Roll back Azure changes section at the end of this exercise after creating the Azure database so that you can make the most of your $200 free Azure credit.

Tracking Hurricane Sandy

Connect to the SQLONE computer
1. Click the SQL2014DEMO-SQLO… button on the right side of the screen to connect to the SQLONE computer. If you are already logged on (shown in the lower right corner of the screen), you can jump to step 5 below to set your screen resolution.
2. Click Send Ctrl-Alt-Del for the SQLONE computer and then click Switch user.
3. Click Send Ctrl-Alt-Del for the SQLONE computer again and then click Other user.
4. Log on to the SQLONE computer as labuser with the password pass@word1.
Note: if you have a monitor that supports a larger screen resolution than 1024 x 768, you can raise the screen resolution for the lab as high as 1920 x 1080. A higher screen resolution makes SQL Server Management Studio easier to use.
5. Right-click the desktop and click Screen resolution.
6. Select 1366 x 768 (a good minimum screen size for using SSMS) and click OK.
7. Click Keep Changes.
8. Resize the LaunchPad Online client window for the lab to fit your screen resolution.
9. During setup you will need to record credentials and server locations. Open notepad.exe to keep track of this information.

Create a storage account for your cluster
1. Log in to the Azure Management Portal.
2. Click + NEW, DATA SERVICES, STORAGE, QUICK CREATE and complete the storage account information as follows:
   - URL – Use your Microsoft ID without the @ domain.
   - LOCATION/AFFINITY GROUP – Use North Central US.
   - SUBSCRIPTION – Use the default.
   - REPLICATION – Locally Redundant.
3. Click the CREATE STORAGE ACCOUNT checkmark to create the account.
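If you prefer scripting to the portal, the same storage account can be created with the classic Azure PowerShell cmdlets. This is an optional, minimal sketch; it assumes the Azure PowerShell module is installed and that Add-AzureAccount has already been run, and the account name is a placeholder:

$storageAccountName = "<your storage account name>"
# Create a locally redundant storage account in the region used throughout this lab
New-AzureStorageAccount -StorageAccountName $storageAccountName -Location "North Central US" -Type Standard_LRS
# Retrieve the primary access key; you will need it later for Azure Storage Explorer
(Get-AzureStorageKey -StorageAccountName $storageAccountName).Primary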
Create your HDInsight cluster
1. Click + NEW, DATA SERVICES, HDINSIGHT, CUSTOM CREATE.
2. On the Cluster Details page, enter the following values, then click the arrow to go to the next page:
   - CLUSTER NAME – Use the same name as your storage account.
   - SUBSCRIPTION NAME – Use the default.
   - CLUSTER TYPE – Select HBase.
   - HDINSIGHT VERSION – Use the default.
3. On the Configure Cluster page, use the following parameters, then click the right arrow to go to the next page:
   - DATA NODES – Use 1. Note: some trial accounts are limited to a value of 1 for the number of data nodes.
   - REGION/VIRTUAL NETWORK – Use North Central US.
4. On the Configure Cluster User page, enter the following parameters, then click the right arrow to go to the next page:
   - USER NAME – Use the same name as your storage account.
   - PASSWORD/CONFIRM PASSWORD – Use Pass@word12.
5. On the Storage Account page, use the following parameters, then click the right arrow to go to the next page:
   - STORAGE ACCOUNT – Select Use Existing Storage.
   - ACCOUNT NAME – Select your storage account name.
6. On the Script Actions page, click the checkmark to complete the new cluster request.
You can view the status of the cluster operation by clicking the DETAILS icon at the bottom of the portal page. Your cluster should be created within 15 minutes.

Upload the US Census data to your storage account
This lab uses US Census data from 2010. A modified version of the data files is located on the lab system in the C:\SQLSCRIPTS\BigData\Demo files directory. In this step, you will use Azure Storage Explorer to upload these files into your storage account.
1. Click the Storage page in the portal and then click your new storage account to view the getting started page.
2. Click MANAGE ACCESS KEYS at the bottom of the page.
3. In the Manage Access Keys dialog, click the Copy icon next to the PRIMARY ACCESS KEY value.
4. Close the dialog by clicking the checkmark.
5. Click the Azure Storage Explorer link on the desktop or on the task bar.
6. Click Add Account.
7. In the Add Storage Account dialog, enter the Storage account name and Storage account key values and then click Save.
8. Click the New icon to create a new container.
9. For the Blob Container Name, use censusdata and click Create.
10. Click the newly created censusdata folder, then click the Upload button.
11. In the Open dialog, browse to C:\SQLSCRIPTS\BigData\Demo files, select both the STCO-MR2010_MT_WY.csv and STCO-MR2010_AL_MO.csv files, and click Open. Once the upload is complete, you will see the two files in the display.
12. Click the New container button and create a new container named stormtrack.
13. Click the stormtrack folder and then upload the SandyStormTrack.csv file from the C:\SQLSCRIPTS\BigData\Demo files directory.
14. Close Azure Storage Explorer.

Using Hive

Create a Hive table for the census data
In this section, you will use Visual Studio to connect to your HDInsight cluster, create Hive tables, and run queries against Hive.
1. Open Visual Studio using the toolbar icon.
2. Click Sign in and use the Microsoft account you used to connect to the Azure portal. Use the default settings and click Start Visual Studio.
3. Click Server Explorer and pin it.
4. Click the expand button for HDInsight and then log in with your Microsoft ID.
5. Once the login is complete, expand the folders under HDInsight to see the cluster resources.
6. Press Ctrl+O to open Create CensusData Hive table.hql in the C:\SQLSCRIPTS\BigData\Demo files directory.
7. Replace <Your storage account name here> on line 15 with your storage account name.
8. Click the Submit button in the toolbar to run the statements in the file.
The first statement drops the table if it already exists. The CREATE EXTERNAL TABLE statement provides metadata for the two csv files in the censusdata container; notice that there is no need to load any data into the Hive table. Hive uses whatever files are in the directory as is, so you do not want files with different structures under the same container path. Because this is an external table, the data stays in its location when you issue a DROP statement. The third statement performs a count operation to display the number of rows in the table.
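For reference, the statements in Create CensusData Hive table.hql follow the pattern sketched below. The column list shown here is illustrative rather than the lab file's actual schema, and the snippet also shows how the same statements could be submitted from PowerShell instead of Visual Studio:

$hql = @'
DROP TABLE IF EXISTS censusdata;
CREATE EXTERNAL TABLE censusdata (
    stname STRING, ctyname STRING, agegrp INT, tot_pop INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'wasb://censusdata@<your storage account name>.blob.core.windows.net/';
SELECT COUNT(*) FROM censusdata;
'@
# Select the cluster, then run the statements as a Hive job (cluster name is a placeholder)
Use-AzureHDInsightCluster "<your cluster name>"
Invoke-Hive -Query $hql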
Once you submit the job, Visual Studio displays the HDInsight Task List to show that the task was submitted. In addition, Visual Studio displays the Hive Job Summary to show the actual status of the execution. It takes approximately two minutes to complete execution. Click the Refresh button until the Job Status is Completed.
Click the Job Output link to see the count of rows.

Create the StormTrack table
1. Press Ctrl+O to open Create StormTrack Hive table.hql in the C:\SQLSCRIPTS\BigData\Demo files directory.
2. Replace <Your storage account name here> on line 11 with your storage account name.
3. Click the Submit button in the toolbar to run the statements in the file. Then click Refresh after about 2 minutes, or until the Job Status is Completed.
4. Click the Job Output link to see the results.

Group By query against CensusData
In this example, you will see how Hive supports a GROUP BY query whose result consolidates the race and sex information into age groups by state and county.
1. Press Ctrl+O to open CensusDataGroupByExample.hql in the C:\SQLSCRIPTS\BigData\Demo files directory. Then click Submit to execute the statement.
Notice on line 1 that the statement tells Hive to use the Tez engine to execute the query; the net result is faster execution. For more information, see the Apache Tez documentation. The GROUP BY statement itself looks much like Transact-SQL.
2. When the Job Status is Completed, click the Job Output link to show the results.

Using Create Table As Select syntax to summarize the data
1. Press Ctrl+O to open CTASGroupByExample.hql in the C:\SQLSCRIPTS\BigData\Demo files directory. Then click Submit to execute the statement.
The Create Table As Select (CTAS) syntax creates data only within the Hive warehouse; when you drop the table, the data goes with it. The data is available to tools like Power Query in Excel by navigating to the container used for the HDInsight cluster and going to the /hive/warehouse/atriskpopulation folder. The data is located in the 000000_0 file; depending on the size of the result, it could be spread across more than one file.
It is important to use an alias for the SUM expression so that the resulting table has an understandable column name. For the SELECT statement after the table is created, notice that the ORDER BY syntax can reference the select-list alias totpop. This is a Hive feature; Hive does not support using column numbers in the ORDER BY clause the way Transact-SQL does.
2. Click the Job Output link to see the results.
3. Click Job Log to view the job information. At the end of the first map-reduce sequence you will see where Hive posted the data.
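The exact statements are in CTASGroupByExample.hql; purely as a sketch of the CTAS pattern described above (the column names are assumptions based on the atriskpopulation table), a summarization submitted from PowerShell might look like this:

$ctas = @'
DROP TABLE IF EXISTS atriskpopulation;
CREATE TABLE atriskpopulation AS
SELECT stname, agegrp, SUM(tot_pop) AS totpop
FROM censusdata
GROUP BY stname, agegrp;
SELECT stname, agegrp, totpop FROM atriskpopulation ORDER BY totpop DESC LIMIT 20;
'@
# Run the CTAS and the follow-up SELECT as a Hive job (cluster name is a placeholder)
Use-AzureHDInsightCluster "<your cluster name>"
Invoke-Hive -Query $ctas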
Summary
You can now use the data in the Hive tables with Excel Power Query or the Microsoft Hive ODBC driver to visualize the data. This is covered later in this lab.

Loading Data

Using Visual Studio to load data
In this demonstration, you will see how to use the Visual Studio HDInsight tools to upload data to your storage account for processing later with Apache Pig.
1. If you closed Visual Studio, reopen it from the toolbar. Then go to the Server Explorer.
2. Expand the Azure folder > Storage and then the storage account used for your HDInsight cluster.
3. Right-click Blobs and click Create Blob Container.
4. Enter pig-weblogs for the container name and click OK. This will be the location where you will upload the sample.log file.
5. Double-click the pig-weblogs folder to display the Container page.
6. Click the Upload blob command in the Container toolbar.
7. For File name, click Browse and open C:\data\sample.log. For the optional Folder name, enter logs. Click OK to start the upload. When complete, Visual Studio displays the folder with the file you just uploaded.
The HDInsight developer tools for Visual Studio 2013 provide all of the essential features needed to upload, download, view, and delete blobs in your Azure storage account.

Introduction to Pig

Identify data to analyze with Pig
In this lab, you will use Apache Pig jobs on HDInsight to analyze large data files.
HDInsight uses an Azure Blob storage container as the default file system for Hadoop clusters. Some sample data files are added to the blob storage as part of cluster provisioning. You can use these sample data files for running Hive queries on the cluster. If you want, you can also upload your own data files to the blob storage account associated with the cluster. See Upload data to HDInsight for instructions. For more information on how Azure Blob storage is used with HDInsight, see Use Azure Blob storage with HDInsight.
The syntax to access files in blob storage is:
wasb[s]://<ContainerName>@<StorageAccountName>.blob.core.windows.net/<path>/<filename>
NOTE: Only the wasb:// syntax is supported in HDInsight cluster version 3.0. The older asv:// syntax is supported in HDInsight 2.1 and 1.6 clusters, but it is not supported in HDInsight 3.0 clusters and will not be supported in later versions.
A file stored in the default file system container can also be accessed from HDInsight using any of the following URIs (using sample.log, the data file used in this lab, as an example):
wasb://mycontainer@mystorageaccount.blob.core.windows.net/example/data/sample.log
wasb:///example/data/sample.log
/example/data/sample.log
If you want to access the file directly from the storage account, the blob name for the file is:
example/data/sample.log
This lab uses a log4j sample file that comes with HDInsight clusters and is stored at \example\data\sample.log. For information on uploading your own data files, see Upload data to HDInsight.
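The same upload can also be scripted with the Azure storage cmdlets. A small sketch, assuming you substitute your own account name and primary access key (the container and blob names simply mirror the steps above):

$ctx = New-AzureStorageContext -StorageAccountName "<your storage account name>" -StorageAccountKey "<your primary access key>"
# Upload C:\data\sample.log into the pig-weblogs container under a logs folder
Set-AzureStorageBlobContent -File "C:\data\sample.log" -Container "pig-weblogs" -Blob "logs/sample.log" -Context $ctx
# List the container's blobs to confirm the upload
Get-AzureStorageBlob -Container "pig-weblogs" -Context $ctx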
Understand Pig Latin
In this section, you will review some Pig Latin statements individually, along with their results after running the statements. In the next section, you will run PowerShell to execute the Pig statements together to analyze the sample log file. The individual Pig Latin statements must be run directly on the HDInsight cluster.
1. Enable Remote Desktop for the HDInsight cluster by following the instructions at Connect to HDInsight clusters using RDP. Log in to the cluster node and, from the desktop, click Hadoop Command Line.
2. From the command line, navigate to the directory where Pig is installed. Type:
C:\apps\dist\hadoop-<version>> cd %pig_home%\bin
3. At the command prompt, type pig and press ENTER to get to the grunt shell.
4. Enter the following to load data from the sample file in the file system, and then display the results.
NOTE: See the text file at C:\SQLSCRIPTS\BigData\AzureHDInsightPig\HDCommandLine-Pig.txt for all of the following command line commands.
grunt> LOGS = LOAD 'wasb:///example/data/sample.log';
grunt> DUMP LOGS;
The output is similar to the following:
5. Go through each line in the data file to find a match on the six log levels:
grunt> LEVELS = foreach LOGS generate REGEX_EXTRACT($0, '(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)', 1) as LOGLEVEL;
6. Filter out the rows that do not have a match and display the result. This gets rid of the empty rows.
grunt> FILTEREDLEVELS = FILTER LEVELS by LOGLEVEL is not null;
grunt> DUMP FILTEREDLEVELS;
The output is similar to the following:
7. Group all of the log levels into their own rows and display the result:
grunt> GROUPEDLEVELS = GROUP FILTEREDLEVELS by LOGLEVEL;
grunt> DUMP GROUPEDLEVELS;
The output is similar to the following:
8. For each group, count the occurrences of the log levels and display the result:
grunt> FREQUENCIES = foreach GROUPEDLEVELS generate group as LOGLEVEL, COUNT(FILTEREDLEVELS.LOGLEVEL) as COUNT;
grunt> DUMP FREQUENCIES;
The output is similar to the following:
9. Sort the frequencies in descending order and display the result:
grunt> RESULT = order FREQUENCIES by COUNT desc;
grunt> DUMP RESULT;
The output is similar to the following:

Submit Pig jobs using PowerShell
This lab provides instructions for using PowerShell cmdlets. Before you go through this section, you must first set up the local environment and configure the connection to Azure. For details, see Get started with Azure HDInsight and Administer HDInsight using PowerShell.
To run Pig Latin using PowerShell:
1. Open Windows PowerShell ISE. On the Windows 8 Start screen, type PowerShell_ISE and then click Windows PowerShell ISE. See Start Windows PowerShell on Windows 8 and Windows for more information.
2. In the bottom pane, run the following command to connect to your Azure subscription:
Add-AzureAccount
You will be prompted to enter your Azure account credentials. This method of adding a subscription connection times out, and after 12 hours you will have to run the cmdlet again.
NOTE: If you have multiple Azure subscriptions and the default subscription is not the one you want to use, use the Select-AzureSubscription cmdlet to select the current subscription.
3. In the script pane (top pane), copy and paste the lines that declare the script variables, replacing <StorageAccountName> with your Azure blob storage account name and <HDInsightClusterName> with your cluster name.
NOTE: The commands for this section can be found at C:\SQLSCRIPTS\BigData\AzureHDInsightPig\HDInsight-Pig.ps1.
If the status folder you specify does not already exist, the script will create it.
4. Append the lines that define the Pig Latin query string and create a Pig job definition (see the sketch after this procedure for the general shape of these lines). You can also use the -File switch to specify a Pig script file on HDFS. The -StatusFolder switch puts the standard error log and the standard output file into the folder.
5. Append the lines that submit the Pig job.
6. Append the lines that wait for the Pig job to complete.
7. Append the lines that print the Pig job output.
NOTE: One of the Get-AzureHDInsightJobOutput cmdlets is commented out to shorten the output.
8. Press F5 to run the script. The Pig job calculates the frequencies of the different log types.
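HDInsight-Pig.ps1 contains the lab's full script; the fragments referred to in the steps above follow roughly the shape sketched below. The variable names are assumptions, and the query string simply repeats the grunt statements from the previous section:

$clusterName  = "<HDInsightClusterName>"
$statusFolder = "tutorials/usepig/status"

# The Pig Latin query: the same statements you ran interactively in the grunt shell
$pigQuery = @'
LOGS = LOAD 'wasb:///example/data/sample.log';
LEVELS = foreach LOGS generate REGEX_EXTRACT($0, '(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)', 1) as LOGLEVEL;
FILTEREDLEVELS = FILTER LEVELS by LOGLEVEL is not null;
GROUPEDLEVELS = GROUP FILTEREDLEVELS by LOGLEVEL;
FREQUENCIES = foreach GROUPEDLEVELS generate group as LOGLEVEL, COUNT(FILTEREDLEVELS.LOGLEVEL) as COUNT;
RESULT = order FREQUENCIES by COUNT desc;
DUMP RESULT;
'@

# Define the Pig job; -File could point to a script file in blob storage instead of -Query
$pigJobDefinition = New-AzureHDInsightPigJobDefinition -Query $pigQuery -StatusFolder $statusFolder

# Submit the job, wait for it to complete, and print the standard output
$pigJob = Start-AzureHDInsightJob -Cluster $clusterName -JobDefinition $pigJobDefinition
Wait-AzureHDInsightJob -Job $pigJob -WaitTimeoutInSeconds 3600
Get-AzureHDInsightJobOutput -Cluster $clusterName -JobId $pigJob.JobId -StandardOutput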
Using Sqoop to copy Hive data to Azure SQL Database

Create an Azure SQL Database
In this section, you will create an Azure SQL Database and a table that you will populate with the census data.
1. Open Internet Explorer and sign in to the Windows Azure management portal for your subscription.
2. Navigate to the SQL Database page by selecting SQL Databases on the left navigation strip. Select +New at the bottom of the page.
3. Click Custom Create. Type MyTestDatabase as the database name. In the Server dropdown, select New SQL Database Server. Click the next arrow at the bottom of the wizard.
4. Type Admin1 as the Administrator ID and P@ssword1 as the password. You can select any region to place your database; these choices reflect the Windows Azure data center locations that are available. Choose the same one used for your HDInsight cluster. Click the check mark to complete this step.
5. We will now add firewall rules to allow client machines outside of the Azure data center to access the server. Click the Server link at the top of the Azure SQL Database management page.
6. Next, click the assigned server name (e.g. 'MyTestServer') in the portal. Then click the Configure link. Here you can enter any number of firewall rules to grant client IPs outside the Azure data center access to your database. This might include applications, or management tools such as SQL Server Management Studio and/or Visual Studio running within your data center. For the lab, we will grant a blanket rule to allow a large range of IP addresses to access the database. Type a rule name of All, and then enter a starting IP address of 0.0.0.0 and an ending address of 254.254.254.254. Click the Save icon at the bottom of the page.

Create a new table using Visual Studio
Now we will create a SQL table in the new Azure SQL Database.
1. Open Visual Studio and view the Server Explorer. Expand Azure and then SQL Databases, and right-click the name of the database you just created.
2. Select Open in SQL Server Object Explorer.
3. Using the File menu, open the C:\SQLSCRIPTS\BigData\Demo files folder and select the CreateCensusTable.sql file.
4. Connect the script to the database by clicking the Change Connection icon, then run the script.
This creates the dbo.AtRiskPopulation table, which matches the structure of the Hive table that contains the census data. Note that the table has a primary key so that you can insert data into the table.

Sqoop with PowerShell to copy to an Azure SQL database
In this section, we will run a Sqoop job using PowerShell to copy data into the Azure SQL database.
Open Windows PowerShell ISE. Navigate to C:\SQLSCRIPTS\BigData\Demo files and open Sqoop-Azure.ps1.
Change the connection variables to match the values in your environment, and set the Azure connection string variables (the sketch below shows the general shape of these variables and the Sqoop command). Notice that $hivetabledir points to the location of the part* files and not to an individual file; there may be a partition and bucket directory structure under the Hive table path, and Sqoop will process all the files under the path recursively.
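Sqoop-Azure.ps1 holds the lab's actual commands; as a loose sketch of what the connection variables and the Sqoop export command typically look like (the server, password, delimiter, and path values are placeholders rather than the lab's exact settings):

$sqlServerName   = "<yourserver>.database.windows.net"
$sqlDatabaseName = "MyTestDatabase"
$sqlLogin        = "Admin1"
$sqlPassword     = "P@ssword1"
# Directory of the Hive table's data files; Sqoop reads every file under this path
$hivetabledir    = "wasb://<container>@<StorageAccountName>.blob.core.windows.net/hive/warehouse/atriskpopulation/"

# Sqoop export command: copy the Hive table's files into dbo.AtRiskPopulation
$sqoopcmd = "export --connect jdbc:sqlserver://$sqlServerName;database=$sqlDatabaseName;user=$sqlLogin;password=$sqlPassword --table AtRiskPopulation --export-dir $hivetabledir --input-fields-terminated-by \001 -m 1"

# Wrap the command in a job definition, submit it to the cluster, and wait for completion
$sqoopJobDef = New-AzureHDInsightSqoopJobDefinition -Command $sqoopcmd
$sqoopJob    = Start-AzureHDInsightJob -Cluster "<HDInsightClusterName>" -JobDefinition $sqoopJobDef
Wait-AzureHDInsightJob -Job $sqoopJob -WaitTimeoutInSeconds 3600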
To create the $sqoopcmd variable with all of the Sqoop command parameters, select the corresponding commands in the script and press F8.
To create the Sqoop job definition command, select the corresponding commands and press F8.
To run the Sqoop map-reduce job, you perform the task just like a regular map-reduce job: select the corresponding commands and press F8.
For a quick validation of the data import operation, go to Visual Studio, open the SQL Server Object Explorer, navigate to the table, right-click it, and select View Data.

Using PowerShell to validate the results
You can also use PowerShell and the Invoke-Sqlcmd cmdlet to validate the results. Select the corresponding commands in the Windows PowerShell ISE and press F8 to show the results of a SELECT query.
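As a sketch of that validation step (the server, credentials, and table name are placeholders that mirror the values used earlier in this lab):

# Run a quick SELECT against the Azure SQL database to confirm the exported rows
Invoke-Sqlcmd -ServerInstance "<yourserver>.database.windows.net" `
              -Database "MyTestDatabase" -Username "Admin1" -Password "P@ssword1" `
              -Query "SELECT COUNT(*) AS RowsLoaded FROM dbo.AtRiskPopulation;"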
Using Power BI for Big Data

Connect Excel to Hadoop with the Microsoft Hive ODBC Driver
1. Download and install the Microsoft Hive ODBC Driver from the Download Center. Install both the 32-bit and the 64-bit versions, and install them separately.
2. From Windows 8, press the Windows key to open the Start screen, and then type data sources. Click Set up ODBC Data sources (32-bit) or Set up ODBC Data Sources (64-bit) depending on your Office version. If you are using Windows 7, choose ODBC Data Sources (32 bit) or ODBC Data Sources (64 bit) from Administrative Tools. This launches the ODBC Data Source Administrator dialog.
3. On the System DSN tab, click Add to open the Create New Data Source wizard.
4. Select Microsoft Hive ODBC Driver, and then click Finish. This launches the Microsoft Hive ODBC Driver DSN Setup dialog.
5. Type or select the following values:
   - Data Source Name: <provide the name of your HDInsight cluster>
   - Host: <your HDInsight cluster name>.azurehdinsight.net
   - Port: 443
   - Database: Default
   - Hive Server Type: Hive Server 2
   - Mechanism: Azure HDInsight Service
   - HTTP Path: <leave blank>
   - User Name: <your HDInsight cluster user name>
   - Password: <your HDInsight cluster password>
6. Click Advanced Options to set some more parameters:
   - Rows fetched per block: 10000
   - Default string column length: 65536
   - Binary column length: 32767
   - Decimal column scale: 10
   - Async Exec Poll Interval (ms): 100
   - Get Tables with Query: check
7. Click Test to test the data source. When the data source is configured correctly, it shows TESTS COMPLETED SUCCESSFULLY!.
8. Click OK to close the Test dialog. The new data source should now be listed in the ODBC Data Source Administrator.
9. Click OK to exit the wizard.

Import data into Excel from an HDInsight cluster
The following steps describe how to import data from a Hive table into an Excel workbook using the ODBC data source that you created in the steps above.
1. Open a new or existing workbook in Excel.
2. On the Data tab, click From Other Sources, and then click From Data Connection Wizard to launch the Data Connection Wizard.
3. Select ODBC DSN as the data source, and then click Next.
4. From ODBC data sources, select the data source name that you created in the previous step, and then click Next.
5. Re-enter the password for the cluster in the wizard, and then click Test to verify the configuration.
6. Click OK to close the test dialog.
7. Click OK. Wait for the Select Database and Table dialog to open. This can take a few seconds.
8. Select the atriskpopulation table that was created in an earlier step and click Finish.
9. In the Import Data dialog, you can change or specify the query. To do so, click Properties. This can take a few seconds.
10. Click the Definition tab to view the Command Text and see the query string. Leave the query as is for now; it will be modified in the next section.
11. Click OK and re-enter the cluster password. It will take a few moments for Excel to execute the query.

Modify the query to be executed in Excel
With the Hive table now opened in Excel, you can modify the query to return the results in different ways.
1. On the Data tab, click Connections.
2. Highlight the HIVE atriskpopulation connection and click Properties.
3. Select the Definition tab to view the query properties.
4. In the Command text box, type the following query (the query is also saved at C:\SQLSCRIPTS\BigData\Demo files\InfantsByState.txt):
SELECT stname, SUM(pop) AS infants
FROM HIVE.default.atriskpopulation
WHERE (agegrp = '1')
GROUP BY stname
ORDER BY infants DESC
5. Click OK and wait a few seconds for the query to execute. It will return records such as the following:
Note that Hive Query Language is very similar to T-SQL. One difference is that you can use a field alias in the ORDER BY clause.

Use the Hive ODBC driver in PowerPivot to load data
Now we will mirror this process using PowerPivot and then create a report in the next section.
1. Open a new Excel spreadsheet.
2. Select the PowerPivot tab and click Manage.
3. In the Get External Data section, click From Other Sources and select Others (OLEDB/ODBC).
4. Click Next. Then click the Build button.
5. Select the Provider tab and click Microsoft OLE DB Provider for ODBC Drivers. Click Next.
6. In the Use data source name list, select the data source you created in the prior steps and provide your user name and password. Check the Allow saving password option. Select or type HIVE as the initial catalog to use. Test the connection.
7. Click OK and Next.
8. Choose the option to Write a query that will specify the data to import. Type in or paste the same SQL statement that was used in the prior section, name it "InfantsByState", and click Validate.
9. After a few moments the query will finish executing. You can now switch back to the workbook by clicking the button in the upper left.

Using the data in PowerPivot, create a data visualization
1. On the Insert tab, select Map to launch Power Map.
2. Under Choose Geography, check the stname field. Click Next.
3. Select the infants checkbox and change the visualization to Bubble. Zoom in and exit out of the Tour pane on the left.
With just a few clicks it is possible to create a nice visualization of the data with Power Map or Power View.

Elastic Processing with PowerShell

Initial setup for the exercise
If you are using the free trial of HDInsight, you will need to remove the HDInsight cluster, since you can have only one under that license. You can remove your cluster with the following commands in PowerShell:
$clustername = "<your cluster name>"
Remove-AzureHDInsightCluster $clustername

PowerShell Integration
In this lab, you will learn to control and automate the deployment and management of your workloads in Windows Azure using Windows Azure PowerShell.
1. Open Windows PowerShell ISE, navigate to C:\SQLSCRIPTS\BigData\Demo files, and open PowerShellAutomation.ps1.
2. Change the variables in <> to the values you want to use for the new cluster. Once you have changed the variables, press Ctrl+S to save the PowerShell script. You can then run each section of code using F8 to execute the selection, or run everything using the F5 Run Script command. The rest of the steps below show the major portions of the script.
3. Create the storage account if it does not exist.
4. If a cluster with the same name already exists, delete it.
5. Now create the new cluster. This will take about 10-15 minutes.
6. Copy the weblog for the Pig sample from the file system to the blob container.
7. Set up the Pig job and the query.
8. Submit the Pig job, wait for it to complete, download the results to the local computer, and display the results.
9. Remove the newly created HDInsight cluster.
At the end of the run, you see the results of the Pig job displayed from the file that was downloaded from the "tutorials/usepig/status/stdout" directory in your $containername.
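PowerShellAutomation.ps1 is the authoritative script for this exercise; the sketch below only illustrates the create-and-remove portions with placeholder names, and the Pig submission itself follows the same pattern shown earlier in the Submit Pig jobs using PowerShell section:

$storageAccountName = "<your storage account name>"
$containerName      = "<your container name>"
$clusterName        = "<your cluster name>"
$location           = "North Central US"
$creds              = Get-Credential        # admin user name and password for the new cluster

# Create the storage account only if it does not already exist
if (-not (Get-AzureStorageAccount -StorageAccountName $storageAccountName -ErrorAction SilentlyContinue)) {
    New-AzureStorageAccount -StorageAccountName $storageAccountName -Location $location
}
$storageAccountKey = (Get-AzureStorageKey -StorageAccountName $storageAccountName).Primary

# Create a one-node cluster that uses the existing storage account and container
New-AzureHDInsightCluster -Name $clusterName -Location $location `
    -DefaultStorageAccountName "$storageAccountName.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageAccountKey `
    -DefaultStorageContainerName $containerName `
    -ClusterSizeInNodes 1 -Credential $creds

# ... upload the weblog, submit the Pig job, and download the results here ...

# Remove the cluster once the job output has been retrieved
Remove-AzureHDInsightCluster -Name $clusterName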
Next steps
You can extend the automated customization of an HDInsight cluster with custom script actions, which became available on December 18, 2014. For example, if you want to configure the HDInsight cluster to install the open source R project, you can follow the published procedure, which uses the new Add-AzureHDInsightScriptAction cmdlet to run a customization script during cluster creation.

Roll back Azure changes
Let's clean up the assets used during this hands-on lab. Here are the items that should be deleted from your subscription using the Azure Management Portal:
Delete the SQL Database created for the Sqoop job.
Delete the storage account used for your data files and HDInsight cluster.
You can now exit the lab environment.

Terms of use
© 2015 Microsoft Corporation. All rights reserved.
By using this Hands-on Lab, you agree to the following terms:
The technology/functionality described in this Hands-on Lab is provided by Microsoft Corporation in a "sandbox" testing environment for purposes of obtaining your feedback and to provide you with a learning experience. You may only use the Hands-on Lab to evaluate such technology features and functionality and provide feedback to Microsoft. You may not use it for any other purpose. You may not modify, copy, distribute, transmit, display, perform, reproduce, publish, license, create derivative works from, transfer, or sell this Hands-on Lab or any portion thereof.
COPYING OR REPRODUCTION OF THE HANDS-ON LAB (OR ANY PORTION OF IT) TO ANY OTHER SERVER OR LOCATION FOR FURTHER REPRODUCTION OR REDISTRIBUTION IS EXPRESSLY PROHIBITED.
THIS HANDS-ON LAB PROVIDES CERTAIN SOFTWARE TECHNOLOGY/PRODUCT FEATURES AND FUNCTIONALITY, INCLUDING POTENTIAL NEW FEATURES AND CONCEPTS, IN A SIMULATED ENVIRONMENT WITHOUT COMPLEX SET-UP OR INSTALLATION FOR THE PURPOSE DESCRIBED ABOVE. THE TECHNOLOGY/CONCEPTS REPRESENTED IN THIS HANDS-ON LAB MAY NOT REPRESENT FULL FEATURE FUNCTIONALITY AND MAY NOT WORK THE WAY A FINAL VERSION MAY WORK. WE ALSO MAY NOT RELEASE A FINAL VERSION OF SUCH FEATURES OR CONCEPTS. YOUR EXPERIENCE WITH USING SUCH FEATURES AND FUNCTIONALITY IN A PHYSICAL ENVIRONMENT MAY ALSO BE DIFFERENT.
FEEDBACK. If you give feedback about the technology features, functionality and/or concepts described in this Hands-on Lab to Microsoft, you give to Microsoft, without charge, the right to use, share and commercialize your feedback in any way and for any purpose. You also give to third parties, without charge, any patent rights needed for their products, technologies and services to use or interface with any specific parts of a Microsoft software or service that includes the feedback. You will not give feedback that is subject to a license that requires Microsoft to license its software or documentation to third parties because we include your feedback in them.
These rights survive this agreement.
MICROSOFT CORPORATION HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS WITH REGARD TO THE HANDS-ON LAB, INCLUDING ALL WARRANTIES AND CONDITIONS OF MERCHANTABILITY, WHETHER EXPRESS, IMPLIED OR STATUTORY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. MICROSOFT DOES NOT MAKE ANY ASSURANCES OR REPRESENTATIONS WITH REGARD TO THE ACCURACY OF THE RESULTS, OUTPUT THAT DERIVES FROM USE OF THE VIRTUAL LAB, OR SUITABILITY OF THE INFORMATION CONTAINED IN THE VIRTUAL LAB FOR ANY PURPOSE.
DISCLAIMER
This lab contains only a portion of the new features and enhancements in Microsoft SQL Server 2014. Some of the features might change in future releases of the product. In this lab, you will learn about some, but not all, of the new features.

