SQL/JSON Features in Oracle Database 12c

Disclaimer

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

SQL/JSON Features in Database 12.2

Oracle Database 12c Release 2 (12.2), the latest generation of the world's most popular database, is now available in the Oracle Cloud. The first release of Oracle Database 12c included significant new SQL functionality for working with JSON documents stored in the database. Oracle Database 12c Release 2 includes five major areas of new JSON-related functionality:

1. Integration with Oracle Database In-Memory. Customers who are licensed for the Oracle Database In-Memory option will be able to use its capabilities to accelerate SQL queries over JSON content.

2. JSON Data Guide. The data guide allows developers to generate and query metadata that describes the JSON documents they have stored in Oracle Database. The JSON Data Guide can also be used to generate JSON schema documents and relational views that reflect the structure of the JSON content that has been stored in the database.

3. PL/SQL Integration. Oracle Database 12c includes a new PL/SQL document API that makes it easy to use JSON path expressions to access and update the contents of a JSON document.

4. JSON Generation. Oracle Database 12c includes new SQL operators that allow the result of a SQL query to be transformed into one or more JSON documents.

5. Spatial Integration. Oracle Database 12c includes support for using Oracle Spatial to index and query GeoJSON content. GeoJSON is a popular way of embedding spatial information, such as locations, into JSON documents.

This tutorial focuses on items 2 through 5.

Modules

1. Setup
2. Storing JSON Documents in Oracle Database 12c
3. Accessing JSON content using Oracle's Simplified Syntax For JSON
4. Accessing JSON using JSON_VALUE and JSON_QUERY
5. Relational access to JSON content using JSON_TABLE
6. Filtering result sets using JSON_EXISTS
7. Indexing JSON documents stored in Oracle Database 12c
8. JSON Search Index
9. JSON Data Guide
10. JSON Generation
11. PL/SQL Integration: Introducing the PL/SQL API for JSON
12. Spatial Integration and Support for GeoJSON

1. Setup

1.1 Ensure the HR Sample Schema is available.

The Hands-on-Lab uses data from the HR Schema. The HR Schema is installed as part of the Oracle Database Sample Schemas. If the Oracle Database Sample Schemas have not been installed, please follow the instructions found in the Oracle Sample Schemas GitHub Repository to install them.
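To confirm that the HR Schema is present before continuing, a quick check such as the following can be run as any user with access to HR (for example, the administrator account used in the next step). This is an optional sanity check rather than part of the lab script; a non-zero count (typically 107 rows in the standard sample data) indicates the schema is installed and accessible.

select count(*) from HR.EMPLOYEES
/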

1.2 Create the required user.

Connect to the database as an administrator and execute the following statements to create the user that will be used when running the Hands-on-Lab. This will ensure that the user has been granted the necessary permissions.

Note that the following code creates a user called STUDENT1; if necessary, replace STUDENT1 with an appropriate value before executing the code in your environment.

-- Grant access to tables in the HR Schema.
def USERNAME=STUDENT1
--
grant CREATE SESSION, RESOURCE to &USERNAME identified by &USERNAME
/
grant UNLIMITED TABLESPACE to &USERNAME
/
grant CREATE VIEW to &USERNAME
/
grant all on HR.COUNTRIES to &USERNAME
/
grant all on HR.LOCATIONS to &USERNAME
/
grant all on HR.DEPARTMENTS to &USERNAME
/
grant all on HR.EMPLOYEES to &USERNAME
/
grant all on HR.JOBS to &USERNAME
/
grant all on HR.JOB_HISTORY to &USERNAME
/
ALTER USER &USERNAME PASSWORD EXPIRE
/

Once the user has been created, connect to the database as the newly created user and set a new password.
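Because the account was created with PASSWORD EXPIRE, the first connection forces a password change. Connecting with SQL*Plus looks roughly like this (TNSALIAS is a placeholder for your connect identifier):

sqlplus STUDENT1@TNSALIAS

SQL*Plus reports that the password has expired and prompts for a new password; enter and confirm the new value, and the session then continues as the STUDENT1 user.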

1.3 Obtain sample data.

The test data required to run this Hands-on-Lab can be downloaded from the following locations. Save the files to a folder on your local file system by right-clicking on the following links and performing a "Save As" operation on each file.

PurchaseOrders.dmp: db-sample-schemas repository on GitHub

CityLots.json: CityLot Data For San Francisco County on GitHub

1.4 Stage the sample data files

If you have write permissions on a folder that can be accessed using the Oracle Database's BFILE mechanism, copy these files to that folder and create a SQL DIRECTORY object that points to that folder. (Replace "/tmp" in the following example with the path to the folder in question.) Note you will need to have been granted the "CREATE ANY DIRECTORY" privilege to execute this SQL.

create or replace DIRECTORY JOSN_HOL_FILES as '/tmp'
/
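If the directory object is created by an administrator rather than by the lab user, the lab user also needs to be granted access to it before it can be referenced. A sketch, assuming the STUDENT1 user from step 1.2:

grant READ, WRITE on DIRECTORY JOSN_HOL_FILES to STUDENT1
/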

1.5 Stage the sample data files to the Oracle XML DB repository using SQL Loader.

Run the following statements to create the objects required to allow SQL*Loader to load files into the XDB Repository. Note you will need to have been granted the "CREATE VIEW" privilege to execute this SQL.

--
create table SQLLDR_REPOSITORY_TABLE (
  RESULT NUMBER
)
/
create or replace view SQLLDR_REPOS_INTERFACE_VIEW
as
SELECT RESULT                       "RESULT",
       CAST(NULL AS VARCHAR2(1024)) "RESOURCE_PATH",
       CAST(NULL AS VARCHAR2(5))    "OVERWRITE",
       TO_BLOB(NULL)                "CONTENT"
  from SQLLDR_REPOSITORY_TABLE
/
create or replace trigger SQLLDR_REPOS_INTERFACE_TRIGGER
instead of INSERT on SQLLDR_REPOS_INTERFACE_VIEW
begin
  null;
end;
/
create or replace package SQLLDR_REPOSITORY_INTERFACE
authid CURRENT_USER
as
  function UPLOAD_CONTENT(P_RESOURCE_PATH VARCHAR2, P_OVERWRITE VARCHAR2, P_CONTENT BLOB) return NUMBER;
end;
/
show errors
--
create or replace package body SQLLDR_REPOSITORY_INTERFACE
as
--
function UPLOAD_CONTENT(P_RESOURCE_PATH VARCHAR2, P_OVERWRITE VARCHAR2, P_CONTENT BLOB)
return NUMBER
as
  V_RESULT      BOOLEAN;
  V_PARENT_PATH VARCHAR2(700) := '/';
begin
  -- Delete any existing resource
  if (DBMS_XDB.existsResource(P_RESOURCE_PATH)) then
    if (P_OVERWRITE = 'TRUE') then
      DBMS_XDB.deleteResource(P_RESOURCE_PATH);
    else
      return 0;
    end if;
  end if;
  -- Ensure the Target Folder Exists
  while (instr(substr(P_RESOURCE_PATH,length(V_PARENT_PATH)+1),'/') > 0) loop
    V_PARENT_PATH := substr(P_RESOURCE_PATH,1,instr(substr(P_RESOURCE_PATH,length(V_PARENT_PATH)+1),'/') + length(V_PARENT_PATH)-1);
    if not(DBMS_XDB.existsResource(V_PARENT_PATH)) then
      V_RESULT := DBMS_XDB.createFolder(V_PARENT_PATH);
    end if;
    V_PARENT_PATH := V_PARENT_PATH || '/';
  end loop;
  -- Create the new Resource
  V_RESULT := DBMS_XDB.createResource(P_RESOURCE_PATH,P_CONTENT);
  return 1;
end;
--
end;
/
show errors

--
create or replace function READ_LINES(P_RESOURCE_PATH VARCHAR2)
return XDB.XDB$STRING_LIST_T pipelined
as
  V_CONTENT        CLOB        := XDBURITYPE(P_RESOURCE_PATH).getClob();
  V_CONTENT_LENGTH PLS_INTEGER := DBMS_LOB.getLength(V_CONTENT);
  V_AMOUNT         PLS_INTEGER := 32767;
  V_OFFSET         PLS_INTEGER := 1;
  V_BUFFER         VARCHAR2(32767);
  V_BUFFER_LENGTH  PLS_INTEGER;
  V_BUFFER_OFFSET  PLS_INTEGER := 1;
  V_NEWLINE_INDEX  PLS_INTEGER := 0;
  V_NEXT_LINE      VARCHAR2(32767);
begin
  while (V_OFFSET < V_CONTENT_LENGTH) loop
    DBMS_LOB.READ(V_CONTENT,V_AMOUNT,V_OFFSET,V_BUFFER);
    V_OFFSET := V_OFFSET + V_AMOUNT;
    V_BUFFER_LENGTH := V_AMOUNT;
    V_AMOUNT := 32767;
    V_BUFFER_OFFSET := 1;
    V_NEWLINE_INDEX := INSTR(V_BUFFER,CHR(10),V_BUFFER_OFFSET);
    -- Pipe out each complete line found in the current buffer
    while (V_NEWLINE_INDEX > 0) loop
      V_NEXT_LINE := V_NEXT_LINE || SUBSTR(V_BUFFER,V_BUFFER_OFFSET,(V_NEWLINE_INDEX-V_BUFFER_OFFSET));
      pipe row (V_NEXT_LINE);
      V_NEXT_LINE := NULL;
      V_BUFFER_OFFSET := V_NEWLINE_INDEX + 1;
      if (SUBSTR(V_BUFFER,V_BUFFER_OFFSET,1) = CHR(13)) then
        V_BUFFER_OFFSET := V_BUFFER_OFFSET + 1;
      end if;
      V_NEWLINE_INDEX := INSTR(V_BUFFER,CHR(10),V_BUFFER_OFFSET);
    end loop;
    -- Carry any trailing partial line into the next iteration
    V_NEXT_LINE := SUBSTR(V_BUFFER,V_BUFFER_OFFSET);
  end loop;
  if (length(V_NEXT_LINE) > 0) then
    pipe row (V_NEXT_LINE);
  end if;
  -- Terminate the pipelined function
  return;
end;
/
show errors
--


Create a SQL*Loader control file called SAMPLE_DATA_FILES.ctl. The content of the control file is shown below. Place this file in the folder that contains the sample data files.

load data
infile *
append into table SQLLDR_REPOS_INTERFACE_VIEW (
  SOURCE        FILLER CHAR(32),
  OVERWRITE     CHAR(5),
  FILLER        FILLER CHAR(1),
  RESOURCE_PATH CHAR(64),
  CONTENT       lobfile(SOURCE) TERMINATED BY EOF,
  RESULT        EXPRESSION "SQLLDR_REPOSITORY_INTERFACE.UPLOAD_CONTENT(:RESOURCE_PATH,:OVERWRITE,:CONTENT)"
)
begindata
purchaseOrders.dmp              TRUE  /public/tutorials/json/testdata/purchaseOrders.dmp
cityLots.json                   TRUE  /public/tutorials/json/testdata/cityLots.json

Execute the following command to load the sample data files into the XML DB Repository:

sqlldr STUDENT1@TNSALIAS control=SAMPLE_DATA_FILES.ctl
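Once SQL*Loader has completed, the staged files can be checked from a SQL session. The first query below lists the resources under the target folder using the standard XML DB RESOURCE_VIEW; the second reads a few lines back through the READ_LINES function created earlier. Both are optional sanity checks rather than part of the lab script.

select ANY_PATH
  from RESOURCE_VIEW
 where ANY_PATH like '/public/tutorials/json/testdata/%'
/
select COLUMN_VALUE
  from TABLE(READ_LINES('/public/tutorials/json/testdata/purchaseOrders.dmp'))
 where rownum <= 5
/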

1.6 Initialize the Tutorial.

Run the following statements to initialize the Tutorial. These statements can also be used to reset the tutorial if you want to start again at any point.

create or replace view EMPLOYEE_KEY_VALUE
as
select EMPLOYEE_ID as ID, 'EmployeeId'      as KEY, to_char(EMPLOYEE_ID)    as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'FirstName'       as KEY, FIRST_NAME              as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'LastName'        as KEY, LAST_NAME               as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'EmailAddress'    as KEY, EMAIL                   as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'TelephoneNumber' as KEY, PHONE_NUMBER            as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'HireDate'        as KEY, to_char(HIRE_DATE)      as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'JobId'           as KEY, JOB_ID                  as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'Salary'          as KEY, to_char(SALARY)         as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'Commision'       as KEY, to_char(COMMISSION_PCT) as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'ManagerId'       as KEY, to_char(MANAGER_ID)     as VALUE from HR.EMPLOYEES
union all
select EMPLOYEE_ID as ID, 'DepartmentId'    as KEY, to_char(DEPARTMENT_ID)  as VALUE from HR.EMPLOYEES
/
declare
  cursor getTable is
    select TABLE_NAME
      from ALL_TABLES
     where TABLE_NAME in ('J_PURCHASEORDER', 'JSON_DUMP_CONTENTS', 'CITY_LOT_FEATURES')
       and OWNER = SYS_CONTEXT('USERENV','CURRENT_USER');
begin
  for t in getTable() loop
    execute immediate 'DROP TABLE "' || t.TABLE_NAME || '" PURGE';
  end loop;
end;
/

2. Storing JSON Documents in Oracle Database 12c

Create a simple table to store JSON documents

In Oracle there is no dedicated JSON data type. JSON documents are stored in the database using standard Oracle data types such as VARCHAR2, CLOB and BLOB. VARCHAR2 can be used where the size of the JSON document will never exceed 4K (32K in databases where extended VARCHAR2 support has been enabled). Larger documents are stored using the CLOB or BLOB data types.

In order to ensure that the content of the column is valid JSON data, a new constraint, IS JSON, is provided that can be applied to a column. This constraint returns TRUE if the content of the column is well formed JSON and FALSE otherwise.

The first statement in this module creates a table which will be used to contain JSON documents.

create table J_PURCHASEORDER (
  ID          RAW(16) NOT NULL,
  DATE_LOADED TIMESTAMP(6) WITH TIME ZONE,
  PO_DOCUMENT CLOB CHECK (PO_DOCUMENT IS JSON)
)
/

This statement creates a very simple table, J_PURCHASEORDER.

The table has a column PO_DOCUMENT of type CLOB. The IS JSON constraint is applied to the column PO_DOCUMENT, ensuring that the column can store only well formed JSON documents.
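The effect of the constraint can be seen with a quick experiment (not part of the lab script): the first insert below succeeds because the document is well formed JSON, while the second fails with an ORA-02290 check constraint violation. The rollback leaves the table empty for the load step that follows.

insert into J_PURCHASEORDER values (SYS_GUID(), SYSTIMESTAMP, '{"PONumber": 1}')
/
insert into J_PURCHASEORDER values (SYS_GUID(), SYSTIMESTAMP, 'This is not JSON')
/
rollback
/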

Loading JSON Documents into the database

JSON documents can come from a number of different sources. Since Oracle stores JSON data using standard SQL data types, all of the popular Oracle APIs can be used to load JSON documents into the database. JSON documents contained in files can be loaded directly into the database using External Tables.

This statement creates a simple external table that can read JSON documents from a dump file generated by a typical NoSQL-style database. In this case, the documents are contained in the file purchaseOrders.dmp. The SQL directory object JOSN_HOL_FILES points to the folder containing the dump file, and this folder also contains any log or bad files generated when the External Table is processed.

Note: External Tables are not supported when using Oracle Cloud managed database services such as Exadata Express.

CREATE TABLE DUMP_FILE_CONTENTS (
  PO_DOCUMENT CLOB
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY JOSN_HOL_FILES
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY 0x'0A'
    BADFILE JOSN_HOL_FILES: 'JSON_DUMPFILE_CONTENTS.bad'
    LOGFILE JOSN_HOL_FILES: 'JSON_DUMPFILE_CONTENTS.log'
    FIELDS (
      PO_DOCUMENT CHAR(5000)
    )
  )
  LOCATION (
    JOSN_HOL_FILES: 'purchaseOrders.dmp'
  )
)
PARALLEL
REJECT LIMIT UNLIMITED
/
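Where the external table route is available, the dump file can be spot-checked before loading with a simple query such as the following (an optional check, not part of the lab script):

select PO_DOCUMENT
  from DUMP_FILE_CONTENTS
 where rownum <= 5
/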

The following code creates a view which acts as the equivalent of the external table in the above example. This alternative should be used when running the Hands-on-Lab in an environment where SQL DIRECTORY objects are not available, such as when using the Oracle Exadata Express cloud service. The view reads the dump file from the Oracle XML DB repository.

create or replace view JSON_DUMP_CONTENTS ( PO_DOCUMENT )
as
select COLUMN_VALUE JSON_DOCUMENT
  from TABLE(READ_LINES('/public/tutorials/json/testdata/purchaseOrders.dmp'))
/


The following statement copies the JSON documents from the dump file into the J_PURCHASEORDER table.

insert into J_PURCHASEORDER
select SYS_GUID(), SYSTIMESTAMP, PO_DOCUMENT
  from JSON_DUMP_CONTENTS
 where PO_DOCUMENT IS JSON
   and rownum < 1001
/
commit
/

The IS JSON condition is used to ensure that the insert operation takes place only for well formed documents. Make sure that the commit statement is executed after the insert statement has completed. SYS_GUID and SYSTIMESTAMP are used to populate columns ID and DATE_LOADED.
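A quick count confirms the load; given the rownum < 1001 predicate it should report at most 1000 documents (an optional check, not part of the lab script).

select count(*) from J_PURCHASEORDER
/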

3. Accessing JSON content using Oracle's Simplified Syntax For JSON

Oracle 12c allows a simple "dotted" notation to be used to perform a limited set of operations on columns containing JSON. It also introduces a set of SQL operators that allow the full power of the JSON path language to be used with JSON. This is similar to the way in which the database allows XQuery to be used to access the contents of XML documents stored in the database.

Accessing JSON using simplified syntax

The dotted notation can be used to perform basic operations on JSON stored in Oracle 12c. Using the dotted notation you can access the value of any keys contained in the JSON document. In order to use the dotted notation, a table alias must be assigned to the table in the FROM clause, and any reference to the JSON column must be prefixed with the assigned alias. All data is returned as VARCHAR2(4000).

The following examples demonstrate how to use the simplified syntax to extract values from a JSON document and how to filter a result set based on the content of a JSON document. The first query shows the count of PurchaseOrder documents by cost center:

select j.PO_DOCUMENT.CostCenter, count(*)
  from J_PURCHASEORDER j
 group by j.PO_DOCUMENT.CostCenter
 order by j.PO_DOCUMENT.CostCenter
/

The second query shows how to fetch the JSON document where the PONumber key has the value 450:

select j.PO_DOCUMENT
  from J_PURCHASEORDER j
 where j.PO_DOCUMENT.PONumber = 450
/

The third query shows how to fetch data from any document where the key PONumber contains the value 450.
