Adobe Captivate



Slide 3 - Oracle Utilities Enterprise Cloud Solutions

[pic]

Slide notes

Hello, my name is Anthony. Welcome to training for the common services available in the Oracle Utilities Enterprise Cloud, covering the Customer Cloud Service, Meter Solution Cloud Service, and Work and Asset Cloud Service.

In this session we'll talk about enhancements to background (also known as batch) processing in this new release.


Slide 4 - Agenda

[pic]

Slide notes

For the enhancements covered in this training, we’ll give an overview, followed by more detail to explain how you can use them, and what business value they bring.

Then we’ll walk you through a demonstration.

Lastly we’ll explain what you need to consider before enabling these features in your business, and what you need to know to set them up.


Slide 5 - Enhancements Overview

[pic]

Slide notes

There are a number of key enhancements that improve batch operations and support batch integration with the cloud infrastructure.

These enhancements include:

The root node in the extract process can be suppressed, if necessary.

The extract process can optionally produce a manifest file, in JSON format, containing details of the process.

Files used in the import process no longer need to be decompressed prior to execution. Likewise, extract files can be compressed as part of the extraction process itself.

Sharing object storage across environments is common for customers, so it is now possible to prefix imports and exports to implement virtual buckets within Object Storage.

The Batch Scheduler API has been improved to return additional details when querying the state of a process.

In alignment with auditing policies, the batch framework will record who initiated the batch process and how.

These enhancements are grouped according to the area they affect.


Slide 6 - Root Node Suppression

[pic]

Slide notes

By default, the XML output configured using the Plug In Batch Extract capability includes a root node (the first node in the schema) that encompasses all the records in the extract. Some target applications have trouble processing these root nodes.

The Plug In Batch Extract template (F1-PDBEX) has been enhanced to allow implementation teams to suppress the output of these root tags in the output file. This provides flexibility when designing and extracting data from the cloud for external applications that may not require a root node in the extract.

The xmlRootName parameter, which usually holds the name of the root node, can instead hold the value suppress to suppress the output of this node in the extract file.
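As a rough illustration of the behavior described above (a minimal sketch only, not the product's internal implementation; the function name and record layout are invented for this example), the effect of the suppress value on the output looks like this:

```python
def write_extract(records, xml_root_name):
    """Illustrative sketch of root node suppression.

    When xml_root_name holds the literal value 'suppress', the records
    are emitted without an enclosing root element; otherwise they are
    wrapped in a root node of that name.
    """
    record_xml = "".join(
        "<record><id>{}</id></record>".format(r) for r in records
    )
    if xml_root_name == "suppress":
        return record_xml  # no enclosing root node
    return "<{0}>{1}</{0}>".format(xml_root_name, record_xml)

# Hypothetical root name "meterReads" wraps all records...
print(write_extract([1, 2], "meterReads"))
# ...while "suppress" emits the records only.
print(write_extract([1, 2], "suppress"))
```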


Slide 7 - Manifest File Support

[pic]

Slide notes

One of the most common patterns when providing files is to also provide what is termed a manifest file. This file lists important information about the files, which target applications can use to check file integrity, and provides volumetrics for the target system to use. This helps improve the data integrity of data transfers and improves communication between the cloud service and external parties.

The cloud service has been extended to allow these files to be produced via the Plug In Batch Extract capability. The file is in JSON format and contains important information about the file or files produced by the process.

If enabled, the file will be co-located with the extracted files, with a name containing the Batch Control Code, Batch Run Number and Rerun Number (if applicable).
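To make the naming and content rules above concrete, here is a hedged sketch. The exact manifest schema is not shown in this training, so the field names (batchControl, files, recordCount, and so on) and the underscore-separated file name are illustrative assumptions only:

```python
import json

def build_manifest(batch_code, run_number, rerun_number, files):
    """Illustrative manifest builder.

    The name combines Batch Control Code, Batch Run Number and, when
    present, Rerun Number, as the training describes. The JSON field
    names below are invented for this example.
    """
    name = "{}_{}".format(batch_code, run_number)
    if rerun_number:
        name += "_{}".format(rerun_number)
    manifest = {
        "batchControl": batch_code,
        "batchRunNumber": run_number,
        "rerunNumber": rerun_number,
        # per-file details a target system could use for integrity
        # checks and volumetrics
        "files": [{"fileName": f, "recordCount": n} for f, n in files],
    }
    return name + ".json", json.dumps(manifest, indent=2)

fname, body = build_manifest("CM-EXTRACT", 42, None, [("extract_001.csv", 1000)])
print(fname)   # CM-EXTRACT_42.json
```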


Slide 8 - File Compression/Decompression Support

[pic]

Slide notes

To save transmission volume between the Oracle Cloud and external parties, for imports and extracts, the Plug In Batch capability has been extended to support compression and decompression of files automatically. The capability supports both the gzip and zip standards for both imports and extracts. If the extract uses concatenation, however, only the gzip compression method is supported.

To reduce implementation costs, the capability will automatically compress or decompress the file based upon the appropriate suffix in the file name for any Plug In Batch based batch processes.
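The suffix-driven behavior can be sketched as follows. This is a minimal illustration of the decision rule only; the helper name is invented and the product's actual handling is internal to Plug In Batch:

```python
import gzip
import io
import os
import zipfile

def compress_for_suffix(file_name, data):
    """Illustrative sketch: choose the compression method from the
    file-name suffix, mirroring the automatic behavior described in
    the training. Not the product's internal API."""
    ext = os.path.splitext(file_name)[1].lower()
    if ext == ".gz":
        return gzip.compress(data)          # gzip standard
    if ext == ".zip":
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            # store the payload under the name without the .zip suffix
            zf.writestr(os.path.basename(file_name)[:-4], data)
        return buf.getvalue()               # zip standard
    return data  # no recognized suffix: leave uncompressed

payload = b"meter,read\n1,100\n"
compressed = compress_for_suffix("extract.csv.gz", payload)
assert gzip.decompress(compressed) == payload  # round-trips
```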


Slide 9 - Bucket Prefix Support

[pic]

Slide notes

Customers using Object Storage tend to share that storage across environments to save costs. To segregate access to that storage, Object Storage implements configurable buckets. As such, it is common practice for each bucket to contain the same directory structures to ensure consistency when using the storage. For example, an implementation might want a bucket for payment_files and perhaps meter_read_files, amongst others. Without a global setting, each environment would require altering each job to point to its bucket individually, which may not be cost effective.

In this release, the cloud service's Object Storage adapter has been enhanced to optionally set a prefix for the bucket at a global level, allowing segregation globally rather than at the individual job level. This prefix is attached at runtime.

For example, you can set up a prefix test_ and configure the Object Storage Adapter in that environment to use it. This allows segregation of resources at lower cost: the file paths on individual jobs remain the same but resolve to different buckets. In our example, the test buckets would be test_payment_files and test_meter_read_files. The production buckets can use another prefix or none at all. This is the same technique used in other Oracle Cloud services, such as Oracle SOA Cloud, to segregate resources.

This technique will save configuration time across environments by ensuring segregation of data is automatically supported at the Object Storage File Adapter level.
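The runtime resolution described above amounts to a simple rule, sketched below. The function name is invented for illustration; the real prefixing happens inside the Object Storage adapter:

```python
def resolve_bucket(configured_bucket, bucket_prefix=""):
    """Illustrative sketch of runtime prefix attachment: job
    configuration keeps the same bucket name in every environment,
    and the environment-level prefix is prepended when the adapter
    resolves the physical bucket."""
    return bucket_prefix + configured_bucket

# The same job configuration...
job_bucket = "payment_files"
# ...resolves to different physical buckets per environment.
print(resolve_bucket(job_bucket, "test_"))  # test environment
print(resolve_bucket(job_bucket))           # production, no prefix
```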


Slide 10 - Batch REST API Improvements

[pic]

Slide notes

The Batch Scheduler and Batch Framework expose a number of REST-based APIs that provide functions to Oracle Utilities Cloud Service Foundation and external schedulers. A number of API changes have been implemented to provide additional information and improve integration with both internal and external schedulers.

One of the Oracle Scheduler integration APIs, F1-DBMSGetJobDetails, previously returned details only for threads of a job in progress. In this release it has been enhanced to return the Batch Code, Run Number and Rerun Number, as well as other information, for all threads regardless of state. For backward compatibility, a new parameter, isInProgress, can be used to filter the view to only executing threads, as in previous releases. This API is used to obtain the state of a job's threads from Cloud Service Foundation, or where an external scheduler wants to reuse the Oracle Scheduler as a subordinate scheduler.

For implementations wanting to use an external scheduler instead of the inbuilt scheduler, the API designed for integration, F1-SubmitJob, has been extended to return the Batch Id, along with other information. The Batch Id can then be used with the other APIs available to the external scheduler to monitor the state of the individual batch run or thread it scheduled.
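The backward-compatibility behavior of the isInProgress parameter can be sketched as a simple filter over thread records. The field names in the records below are illustrative only; the actual response elements are documented in the API's OpenAPI specification:

```python
def filter_threads(threads, is_in_progress=False):
    """Illustrative sketch of the isInProgress filter: when True, only
    executing threads are returned, as in previous releases; otherwise
    all threads are returned regardless of state."""
    if is_in_progress:
        return [t for t in threads if t["status"] == "In Progress"]
    return threads

# Hypothetical thread records for a single job run.
threads = [
    {"thread": 1, "status": "In Progress", "batchCode": "CM-EXTRACT", "runNumber": 7},
    {"thread": 2, "status": "Complete",    "batchCode": "CM-EXTRACT", "runNumber": 7},
]
print(len(filter_threads(threads)))                       # 2 (all states)
print(len(filter_threads(threads, is_in_progress=True)))  # 1 (executing only)
```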


Slide 11 - Recording User and Submission Method

[pic]

Slide notes

There are numerous methods for executing batch within the cloud service. For auditing purposes it is important to understand who initiated the execution and which method was used. The batch framework has been enhanced to record the user and the method of execution. This improves traceability of executions for diagnosing issues and will be used in analytics.

In this release a number of changes have been implemented regardless of the method used to execute the batch process:

The user who initiated the process is recorded, in addition to the user configured to be used in the batch process.

The method used for initiating the batch process is recorded, which can be one of:

Online - populated when a user manually creates a batch job submission.

Generated - populated by algorithms that submit a batch job and by 'initiator' batch jobs that submit other batch jobs.

Scheduled - populated by the Oracle DBMS Scheduler.

Timed - populated by the batch daemon that submits timed jobs.

Other - populated when no other value is provided.

This enhancement will not populate past executions with these values. Any execution after this upgrade will populate the values as expected.

Note: For jobs initiated as Timed or via the DBMS Scheduler, the user field may not be populated, as they are process driven, not user driven.
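The recording rules above can be summarized in a short sketch. The record structure and function name are invented for illustration; only the five method values and the fallback to Other come from the training:

```python
# The five submission methods described in this training.
SUBMISSION_METHODS = {"Online", "Generated", "Scheduled", "Timed", "Other"}

def record_submission(user, method=None):
    """Illustrative sketch of the recording rule: a missing or
    unrecognized method falls back to Other, and process-driven
    executions (Timed, Scheduled) may carry no user."""
    if method not in SUBMISSION_METHODS:
        method = "Other"
    return {"initiatingUser": user, "submissionMethod": method}

print(record_submission("OPS01", "Scheduled"))
print(record_submission(None))  # process driven: no user, method defaults to Other
```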

Notes

Slide 12 - Batch Enhancements

[pic]

Slide notes

Batch, or background, processing is a significant part of the cloud service. The ability to process data in bulk within the service, and to integrate with third parties, is important for success.

These enhancements improve the way the service interacts with third parties for data to reduce costs and risk, improve the way work is scheduled, both internally and externally, and improve the configuration and operations of these processes within the service.


Slide 18 - Implementation Advice

[pic]

Slide notes

In this implementation advice section we will go through what you need to consider before enabling these features in your business, and what you need to know to set them up.


Slide 19 - Feature Impact Guidelines

[pic]

Slide notes

This table depicts key update information for the features covered in this training.

Three of the features covered in this training are delivered ready to use, with no additional configuration.

Root Node Suppression, Manifest File Support and Object Storage Bucket Support are available with some basic configuration.

All features can be accessed through existing shipped job roles.


Slide 20 - Feature Delivered Ready to Use Impact Analysis

[pic]

Slide notes

This table details the impact to your current business flows of features in this training that are delivered ready to use by your end-users.

The Compression/Decompression Support has a small-scale impact: it is enabled by specifying an appropriate suffix on the file name parameters for imports and exports in Plug In Batch.

The Batch API enhancements are transparent to existing implementations, with additional information returned in the REST API responses.

Internally the user and submission method are recorded by the service whenever batch processes are executed.


Slide 21 - Summary of Actions Needed to Use Features

[pic]

Slide notes

Enablement for the batch enhancements is as follows:

There is no need to use Feature Configuration or Master Configuration to enable the facility.

The Root Node Suppression and Manifest File Creation enhancements can be enabled on the Plug In Batch Extract Batch Controls as parameters as desired.

Compression and decompression support is automatic via the file name suffix for Plug In Batch based jobs.

Configuration of the Bucket Prefix is performed manually on the F1-Storage Extendable Lookup using the Object Storage Adapter. It is not available with the Native Adapter.

Batch API enhancements are automatically available to Cloud Service Foundation and/or an external scheduler (as applicable).

With respect to recording the user and submission method, this applies to any execution after this upgrade is implemented. If you have extensions, such as algorithms, that submit processes, then you need to set the submission method.


Slide 22 - Enablement Detail for Root Node Suppression

[pic]

Slide notes

Root node suppression is configured on the Batch Control.

For batch controls based upon the F1-PDBEX template, which uses the com.splwg.base.domain.batch.pluginDriven.PluginDrivenExtractProcess java class, the xmlRootName configuration parameter has been enhanced to accept the value suppress, which removes the root node from the export file. This can be set on batch controls based upon this template or overridden at runtime in scheduling.


Slide 23 - Enablement Detail for Manifest File

[pic]

Slide notes

Manifest creation is configured on the Batch Control.

For batch controls based upon the F1-PDBEX template, which uses the com.splwg.base.domain.batch.pluginDriven.PluginDrivenExtractProcess java class, the manifestOption configuration parameter has been added to enable the creation of the manifest, with the name outlined on the parameter. This can be set on batch controls based upon this template or overridden at runtime in scheduling.

Setting the value to Y enables the creation of a manifest.


Slide 24 - Enablement Detail for Object Storage Bucket Prefix Support

[pic]

Slide notes

On the Master Configuration F1-FileStorage a number of aliases are defined for use with the service. To configure the bucket prefix appropriate for each environment, edit each alias entry and set the Bucket Name Prefix to the appropriate value.


Slide 25 - Enablement Detail for Compression/Decompression

[pic]

Slide notes

The compression and decompression support is enabled on the Batch Controls for entries based upon the com.splwg.base.domain.batch.pluginDriven.PluginDrivenExtractProcess and com.splwg.base.domain.batch.pluginDriven.PluginDrivenUploadProcess java classes, using the appropriate file suffix.

For zip files use the zip suffix, and for gzip files use the gz suffix. Using these suffixes will automatically enable compression and decompression support for exports and imports respectively.


Slide 26 - Enablement Detail for REST API Improvements

[pic]

Slide notes

The API changes for SubmitJob and DBMSGetJobDetails are automatically added to the API and are available from the API registry, including the OpenAPI specification. The change includes documentation of the element names and their contents.

The OpenAPI specification is available for viewing via View Specification after opening the service under the Inbound Web Services menu option.


Slide 27 - Enablement Detail for User and Submission Method

[pic]

Slide notes

The user and submission method are automatically recorded no matter which method is used to execute batch processes. These enhancements are visible from various screens and APIs to allow auditing of this information.


Slide 28 - Enablement Best Practices

[pic]

Slide notes

To take full advantage of the enhancements that are not automatically enabled, do the following:

Use the Plug In Batch templates to take advantage of all the capabilities extended in those templates.

If desired, realign your Object Storage buckets and set the appropriate bucket prefix on the File Storage Extendable Lookup for each environment. For production, use of the prefix is optional.

For algorithm extensions that submit batch processes, ensure that the submission method is set to Generated in your code. If this is not set, the value Other is used.


Slide 32 - Job Roles

[pic]

Slide notes

This table details the typical generic job roles that will access the new capabilities covered in this training.

Administrators will configure the Extendable Lookup settings for Object Storage Bucket support.

Business Architects or Developers will configure the batch parameters relating to root node suppression, manifest file creation and file compression/decompression support.


Slide 33 - Business Process Information

[pic]

Slide notes

The business processes associated with the new capabilities covered in this training are detailed as follows:

To configure Plug In Batch Exports with the enhancements:

Set the Object Storage Bucket Prefix on the File Adapter Extendable Lookup for the environment.

Set the Batch Control settings for root node suppression, manifest file support and compression support.

To configure Plug In Batch Imports with the enhancements:

Set the Object Storage Bucket Prefix on the File Adapter Extendable Lookup for the environment.

Set the Batch Control settings for decompression support.

To configure Batch Scheduling with the enhancements:

For internal scheduling the API will be exposed via the Cloud Service Foundation product.

For external scheduling the API will automatically expose the additional information for use in your external scheduler via the provided REST API.

This concludes this presentation. Thank you for listening. You can easily pause and rewind any of these slides if you require additional time to take in the detail.
