Feed aggregator

Security Monitoring and Compliance

Oracle is introducing two new security cloud services, built upon Oracle’s secure unified big data platform, the Oracle Management Cloud. Oracle Management Cloud (OMC) is a suite of...


Oracle JET Slider in Foreach Loop

Andrejus Baranovski - Wed, 2017-03-29 13:55
While working on a project last week, I had a question from the development team - how to render multiple Oracle JET Slider components in a foreach loop. I thought this could be a useful tip for other developers too.

You can get JET sample app from GitHub - JETSliderSample.

Take a look at dashboard.js. I have defined an array with two elements, each containing the variables required to initialize a JET slider (the value property must be an observable, otherwise it will not receive changed data). Each array element defines a slider to be rendered in the HTML. There is also a JS function which prints the array content; it can be useful for accessing the changed slider values:

The HTML implementation contains a foreach loop pointing to the array from the JS module. Each loop iteration renders a JET slider. The JET slider properties must be mapped to the variables from the array elements, otherwise the slider will not function when used inside a foreach loop:
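
For example, a rough sketch of such a loop (assuming the ojComponent binding JET offered at the time; the array and property names are illustrative - the actual code is in the GitHub sample):

<!-- one ojSlider per array element; 'value' must be a Knockout observable -->
<div data-bind="foreach: sliderArray">
  <div data-bind="ojComponent: {component: 'ojSlider', value: value, min: min, max: max, step: step}"></div>
</div>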

This is how the UI looks. Multiple slider components are displayed through the foreach loop. The user can adjust the slider values and print the new values in the JS function (by hitting the Submit button).

Croatian Grocery Retailer Konzum Deploys Oracle Retail Planning & Optimization Solutions

Oracle Press Releases - Wed, 2017-03-29 13:24
Press Release
Croatian Grocery Retailer Konzum Deploys Oracle Retail Planning & Optimization Solutions Grocery Retailer Enhances Collaboration with Suppliers

Redwood Shores, Calif.—Mar 29, 2017

Today, Oracle announced that Croatian grocery leader Konzum has implemented Oracle Retail Planning and Optimization solutions to promote business operations efficiency, enable smarter decision making, enhance collaboration with suppliers and improve alignment with the store network. Konzum is part of Agrokor, whose retail group operates 2,000 stores in the Adriatic region, making it the dominant grocery retailer there.

Part of this growth is a result of acquisitions, which have led to the development of a diversified store network. Managing inventory levels for many different store layouts means having to deal with a lot of exceptions on a regular basis. Since the exceptions are part of daily operations, Konzum decided to embrace the issue and address it in a systematic manner.

Konzum is a long-time customer that leverages Oracle Retail Merchandise Operations Management and Supply Chain Management solutions to drive operational efficiencies and enable growth. Recently, Konzum extended the partnership by expanding the use of the Oracle Retail Predictive Application Server with a bespoke solution called Replenishment Parameters Management (REPAMA), developed by Oracle PartnerNetwork Gold Level sponsor Sigmia. REPAMA enables the right team to manage inventory levels directly in the system, rather than communicating required changes via error-prone channels.

“The implementation of the REPAMA on the Oracle Retail Planning platform has reduced response times, increased efficiency in decision making and reduced the overall workload through better communication,” said Adrian Alajkovic, Inventory Management & Planning Director, Konzum.

"Any project engagement establishes a supplier-customer channel and always brings up the challenge to deliver real value to the customer. In partnership with Oracle, REPAMA has addressed this challenge recursively by delivering value to Konzum's consumers.  Achieving channel coordination is always a win-win strategy,” said Argiris Mokios, Supply Chain Solutions Director, Sigmia.

Backed by the power of the Oracle Retail Merchandising and Supply Chain solutions, this extension enables real Vendor Managed Inventory in 700+ stores with the Oracle Retail Predictive Application Server extension on Planning and Optimization. Now Konzum is able to directly engage suppliers in managing inventory levels.

“Empowerment means giving authority, control and responsibility to the teams who require information to act. Modern retailing transforms the user experience and infuses science into persona-based dashboards,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “Retailing must become more intuitive and seamless, and technology can be leveraged to better execute strategic goals.”

Oracle Retail @ The Poland & CEE Retail Summit 2017

Oracle Retail is the Platinum Sponsor of the POLAND & CEE RETAIL SUMMIT 2017, hosted on 29-30 March 2017 in Warsaw, Poland. The Retail Summit includes two intensive days filled with discussions about business and the exchange of strategic information. It is also the best place to look for more effective models of cooperation between retail chains and suppliers, as well as for optimizing processes and strengthening business relations.

Contact Info
Matt Torres
Oracle
+1.415.595.1584
matt.torres@oracle.com
About Sigmia

Sigmia is a fast-growing IT consulting firm, specializing in retail planning information systems. Sigmia provides end-to-end system integration and consulting services for key merchandise planning and supply chain planning systems and processes. The company's mission is to help retailers worldwide leverage the power of integrated collaborative planning applications, enabling operational excellence, profitable growth and increased customer satisfaction. In essence, Sigmia builds systems and processes to help people make informed, data-driven business decisions.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Using SQL for Dashboard building

Nilesh Jethwa - Wed, 2017-03-29 13:00
From SQL Query to Analysis

InfoCaptor is an extremely versatile dashboard application. It started out initially just as a "SQL Dashboard".

What do you mean by "SQL Dashboard"?

This was the MVP (minimum viable product) for a dashboard tool.

The basic premise of the idea was that a developer can type any SQL query and produce the information. The information is then displayed in any kind of visual. All the widgets within the dashboard editor are data-aware.

Read more at http://www.infocaptor.com/dashboard/how-to-use-sql-to-build-dashboard

Oracle BPM: Time for Time Out (2)

Jan Kettenis - Wed, 2017-03-29 12:53
In a previous blog posting I discussed a solution to re-initiate a scope in BPMN that is supposed to time out after some time. In this posting I discuss how that solution inspires a couple of other use cases where a time out has to be re-initiated by calling an operation on the process.

In the following process model there are three flows, for three different use cases of re-initiating the time-out of:

  1. A process instance (top flow),
  2. An (asynchronous) Receive activity (middle flow),
  3. A User activity (bottom flow).



Re-initiate Timer for Process Instance

The trick here is to use an Event Based gateway that either fires when the time-out occurs, or responds to the call to the re-initiation operation (Reinitiate Requested in the picture), which passes on a new duration. The Timeout Event Gateway is started again, whereby the new duration is used to (re)schedule the Time Out timer. The reinitiate Gateway is necessary to loop back, and is the default. The condition of the "no" flow is "false".

The following picture shows the flow when that happens.


Re-initiate Timer for Receive Activity

The re-initiation of the Receive activity happens through a Boundary Message event. The dummy Gateway does not do anything but is necessary to loop back to. The Receive is then rescheduled with a timer that has a new duration as passed on through the call.

The following picture shows the flow when that happens.

Re-initiate Timer for a User Activity

In the previous two examples, the timer is completely (re)scheduled with the passed-on duration. In the bottom example the time-out of the User activity happens by setting the expiration on the Human Task. This is the recommended way as it will make the expiration visible in Workspace, and make sure the Human Workflow Engine properly cleans up the Human Task (which was not always the case in previous releases of the Oracle BPM Suite).

What happens in this scenario is that the expiry is actually not re-initiated but instead paused for a while, using an Update activity with operation "Suspend Timers", then a wait, after which the timer is continued using an Update activity with operation "Resume Timers". This construction allows usage of a (non-interrupting) Event Subprocess, which has the advantage that it does not clutter the rest of the process model; you keep the same Human Task instance (with the same taskId); and, if you have multiple Human Tasks at the same time, you can also use this construction to suspend those other user activities as well.




The following picture shows the flow when that happens.

If you want to re-initiate the timer in a similar way as in the previous two use cases, then you can use the second solution with a Boundary Timer event and a Boundary Message event. The result will be that the Human Task is actually aborted (although, as said, not properly in some older 11g versions), and then a new instance is created (with a new taskId!). Depending on your process model you can also put the User activity in a scope of its own, and re-initiate the timer of that scope as described in the previous posting on this topic.

Development and Runtime Experiences with a Canonical Data Model Part I: Standards & Guidelines

Amis Blog - Wed, 2017-03-29 12:21
Introduction

In my previous blog I’ve explained what a Canonical Data Model (CDM) is and why you should use it. This blog is about how to do this. I will share my experiences on how to create and use a CDM. I gained these experiences at several projects, small ones and large ones. All of these experiences were related to an XML based CDM. This blog consists of three parts. This blogpost contains part I: Standards & Guidelines. The next blogpost, part two, is about XML Namespace Standards and the last blogpost contains part three about Dependency Management & Interface Tailoring.
This first part, about standards and naming conventions, primarily applies to XML, but the same principles and ideas will mostly apply to other formats, like JSON, as well. The second part, about XML namespace standards, is, as its name already indicates, only applicable to an XML format CDM. The last part, in the third blogpost, about dependency management & interface tailoring, applies to all kinds of data formats.

Developing a CDM

First, about the way of creating a CDM: it’s not doable to create a complete CDM upfront and only then start designing and developing services. This is because you can only determine data usage, completeness and quality while developing the services and gaining experience in using them. A CDM is a ‘living’ model and will change over time.
When the software modules (systems or data stores) which are to be connected by the integration layer are being developed together, the CDM will change very often. While developing software you always encounter shortcomings in the design, unseen functional flaws, unexpected requirements or restrictions, and changes in design because of new insights or changed functionality. So sometimes the CDM will even change on a daily basis. This perfectly fits into modern Agile software development methodologies, like Scrum, where changes are welcome.
When the development stage is finished and the integration layer (SOA environment) is in the maintenance stage, the CDM will still change, but at a much slower pace. It will keep on changing because of maintenance changes and modifications of connected systems or trading partners. Changes and modifications due to new functionality also cause new data entities and structures which have to be added to the CDM. These changes and modifications occur because business processes change over time, driven by a changing world, ranging from technical innovations to social behavioral changes.
Either way, the CDM will never be finished and reach a final, changeless state, so a CDM should be flexible and created in such a way that it welcomes changes.

When you start creating a CDM, it’s wise to define standards and guidelines about defining and using the CDM beforehand. Make a person (or a group of persons in a large project) responsible for developing and defining the CDM. This means he defines the data definitions and structures of the CDM. When using XML, this person is responsible for creating and maintaining the XML Schema Definition (XSD) files which represent the CDM. He develops the CDM based on requests from developers and designers. He must be able to understand the needs of the developers, but he should also keep the model consistent, flexible and future proof. This means he must have experience in data modeling and in the data format (e.g. XML or JSON) and specification language (e.g. XSD) being used. Of course, he also guards the standards and guidelines which have been set. He is also able, when needed, to deny requests for a CDM change from (senior) developers and designers in order to preserve a well-defined CDM, and to provide an alternative which meets their needs as well.

Standards & Guidelines

There are more or less three types of standards and guidelines when defining an XML data model:

  • Naming Conventions
  • Structure Standards
  • Namespace Standards
Naming Conventions

The most important advice is that you define naming conventions upfront and stick to them. As with all naming conventions in programming languages, there are a lot of options and often it’s a matter of personal preference. Changing conventions because of different personal preferences is not a good idea. Mixed conventions result in ugly code. Nevertheless I do have some recommendations.

Nodes versus types
The first one is to make a distinction between the name of a node (element or attribute) and an XML type. I’ve been in a project where the standard was to give them exactly the same name. In XML this is possible! But the drawback was that there were connecting systems and programming languages which couldn’t handle this! For example the standard Java library for XML parsing, JAX-P, had an issue with this. The Java code which was generated under the hood used the name of an XML type for a Java class name and the name of an element as a Java variable name. In Java it is not possible to use an identical name for both. In that specific project, this had to be fixed manually in the generated Java source code. That is not what you want! It can easily be avoided by using different names for types and elements.

Specific name for types
A second recommendation, which complements the advice above, is to use a specific naming convention for XML types, so their names always differ from node names. The advantage for developers is that they can recognize from the name if something is an XML node or an XML type. This eases XML development and makes the software code easier to read and understand and thus to maintain.
Often I’ve seen a naming convention which tries to implement this by prescribing that the name of an XML type should be suffixed with the token “Type”. I personally do not like this specific naming convention. Consider you have a “Person” entity, so you end up with an XML type named “PersonType”. This perfectly makes sense, doesn’t it? But how about a “Document” entity? You end up with an XML type named “DocumentType” and guess what: there is also going to be a “DocumentType” entity, resulting in an XML type named “DocumentTypeType”…!? Very confusing in the first place. Secondly, you end up with an element and an XML type with the same name! The name “DocumentType” is used as a name for an element (of type “DocumentTypeType”) and “DocumentType” is used as an XML type (of an element named “Document”).
From experience I can tell you there are more entities with a name that ends with “Type” than you would expect!
My advice is to prefix an XML type with the character “t”. This not only prevents this problem, but it’s also shorter. Additionally you can distinguish an XML node from an XML type by the start of its name. This naming convention results in element names like “Person”, “Document” and “DocumentType” versus type names “tPerson”, “tDocument” and “tDocumentType”.
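
A minimal sketch of how this works out in an XSD (the entity names are taken from the example above; the child elements are made up for illustration):

<complexType name="tDocumentType">
  <sequence>
    <element name="Code" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="Description" type="string" minOccurs="0" maxOccurs="1"/>
  </sequence>
</complexType>
<complexType name="tDocument">
  <sequence>
    <element name="Id" type="string" minOccurs="0" maxOccurs="1"/>
    <!-- element "DocumentType" and type "tDocumentType" can never collide -->
    <element name="DocumentType" type="tns:tDocumentType" minOccurs="0" maxOccurs="1"/>
  </sequence>
</complexType>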

Use CamelCase – not_underscores
The third recommendation is to use camel case for names instead of using underscores as separators between the words which make up the name of a node or type. This shortens a name and the name can still be read easily. I’ve got a slight preference for starting a name with an uppercase character, because then I can use camel case beginning with a lowercase character for local variables in logic or transformations (BPEL, XSLT, etc.) in the integration layer or tooling. This results in a node named “DocumentType” of type “tDocumentType”, and when used in a local variable in code, this variable is named “documentType”.

Structure Standards

I also have some recommendations about standards which apply to the XML structure of the CDM.

Use elements only
The first one is to never use attributes; use only elements. You can never expand an attribute and create child elements in it. This may not be necessary at the moment, but may become necessary sometime in the future. Also, an attribute cannot have the ‘null’ value, in contrast with an element. You can argue that an empty value can represent the null value, but this is only possible with string type attributes (otherwise it’s considered invalid XML when validating against the schema) and often there is a difference between an empty string and a null value. Another disadvantage is that you cannot have multiple attributes with the same name inside an element.
Furthermore, using elements makes XML more readable for humans, which helps developers in their coding and debugging. A good read about this subject is “Principles of XML design: When to use elements versus attributes”. This article contains a nice statement: “Elements are the extensible engine for expressing structure in XML.” And that’s exactly what you want when developing a CDM that will change over time.
The last advantage is that when the CDM only consists of elements, processing layers can add their own ‘processing’ attributes purely for the purpose of helping the processing itself. This means that the result, the XML which is used in communication with the world outside of the processing system, should be free of attributes again. Processing attributes can also be added in the interface, to provide extra information about the functionality of the interface. For example, when retrieving orders with operation getOrders, you might want to indicate for each order whether it has to be returned with or without customer product numbers:

<getOrdersRequest>
  <Orders>
    <Order includeCustProdIds='false'>
      <Id>123</Id>
    </Order>
    <Order includeCustProdIds='true'>
      <Id>125</Id>
    </Order>
    <Order includeCustProdIds='false'>
      <Id>128</Id>
    </Order>
  </Orders>
</getOrdersRequest>

Beware that these attributes are processing or functionality related, so they should not be part of the data of the entity. And ask yourself if they are really necessary. You might consider providing this extra functionality in a new operation, e.g. an operation getCustProdIds to retrieve customer product ids or an operation getOrderWithCustIds to retrieve an order with customer product numbers.

All elements optional
The next advice is to make all the elements optional! Unexpectedly, there always is a system or business process which doesn’t need a certain (child) element which you initially thought would always be necessary. On one project this was the case with id elements. Each data entity must have an id element, because the id element contains the functional unique identifying value for the data entity. But then there came a business case with a front end system that had screens in which the data entity was being created. Some of the input data had to be validated before the unique identifying value was known, so the request to the validation system contained the entity data without the identifying id element, and the mandatory id element had to be changed into an optional element. Of course, you can solve this by creating a request which only contains the data that is used, in separate elements, so without the use of the CDM element representing the entity. But one of the powers of a CDM is that there is one definition of an entity.
At that specific project, in time, more and more mandatory elements turned out to be optional somewhere. Likely this will happen at your project as well!

Use a ‘plural container’ element
There is, of course, one exception: an element which should always be mandatory. That is the ‘plural container’ element, which is only a wrapper element around a single element that may occur multiple times. This is my next recommendation: when a data entity (XML structure) contains another data entity as a child element and this child element occurs two or more times, or there is a slight chance that this will happen in the future, then create a mandatory ‘plural container’ element which acts as a wrapper element containing these child elements. A nice example of this is an address. More often than you might think, a data entity contains more than one address. When you have an order as a data entity, it may contain a delivery address and a billing address, while you initially started with only the delivery address. So when initially there is only one address and the XML is created like this:

<Order>
  <Id>123</Id>
  <CustomerId>456</CustomerId>
  <Address>
    <Street>My Street</Street>
    <ZipCode>23456</ZipCode>
    <City>A-town</City>
    <CountryCode>US</CountryCode>
    <UsageType>Delivery</UsageType>
  </Address>
  <Product>...</Product>
  <Product>...</Product>
  <Product>...</Product>
</Order>

Then you have a problem with backwards compatibility when you have to add the billing address. This is why it’s wise to create a plural container element for addresses, and for products as well. The name of this element will be the plural of the element it contains. The above XML will then become:

<Order>
  <Id>123</Id>
  <CustomerId>456</CustomerId>
  <Addresses>
    <Address>
      <Street>My Street</Street>
      <ZipCode>23456</ZipCode>
      <City>A-town</City>
      <CountryCode>US</CountryCode>
      <UsageType>Delivery</UsageType>
    </Address>
  </Addresses>
  <Products>
    <Product>...</Product>
    <Product>...</Product>
    <Product>...</Product>
  </Products>
</Order>

In the structure definition, the XML Schema Definition (XSD), define the plural container element to be single and mandatory. Make its child elements optional and without a maximum number of occurrences. Firstly, this results in maximum flexibility and secondly, in this way there is only one way of constructing XML data that doesn’t have any child elements. In contrast, when you make the plural container element optional, you can create XML data that doesn’t have any child elements in two ways: by omitting the plural container element completely and by adding it without any child elements. You may want to solve this by dictating that the plural container element always has at least one child element, but then the next advantage, discussed below, is lost.
So the XML data example of above will be modeled as follows:

<complexType name="tOrder">
  <sequence>
    <element name="Id" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="CustomerId" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="Addresses" minOccurs="1" maxOccurs="1">
      <complexType>
        <sequence>
          <element name="Address" type="tns:tAddress" minOccurs="0" maxOccurs="unbounded"/>
        </sequence>
      </complexType>
    </element>
    <element name="Products" minOccurs="1" maxOccurs="1">
      <complexType>
        <sequence>
          <element name="Product" type="tns:tProduct" minOccurs="0" maxOccurs="unbounded"/>
        </sequence>
      </complexType>
    </element>
  </sequence>
</complexType>
<complexType name="tAddress">
  <sequence>
    ...
  </sequence>
</complexType>
<complexType name="tProduct">
  <sequence>
    ...
  </sequence>
</complexType>

There is another advantage of this construction for developers. When there is a mandatory plural container element, this element acts as a kind of anchor or ‘join point’ when XML data has to be modified in the software and, for example, child elements have to be added. As this element is mandatory, it’s always present in the XML data that has to be changed, even if there are no child elements yet. So the code of a software developer can safely ‘navigate’ to this element and make changes, e.g. adding child elements. This eases the work of a developer.

Be careful with restrictions
You never know beforehand with which systems or trading partners the integration layer will connect in the future. When you define restrictions in your CDM, beware of this. For example, restricting a string type to a list of possible values (an enumeration) is very risky. What to do when in the future another possible value is added?
Even a more flexible restriction, like a regular expression, can soon become too strict as well. Take for example the top level domain names on the internet. They were once restricted to two character abbreviations for countries, some three character abbreviations (“net”, “com”, “org”, “gov”, “edu”) and one four character word, “info”, but that’s history now!
This risk applies to all restrictions: restrictions on character length, numeric restrictions, restrictions on value ranges, etc.
Likewise, I bet that the length of product id’s in the new version of your ERP system will exceed the current one.
My advice is to minimize restrictions as much as possible in your CDM, preferably no restrictions at all!
Instead, define restrictions on the interfaces, the APIs to the connecting systems. When for example the product id of your current ERP system is restricted to 8 characters, it perfectly makes sense to define a restriction on the interface with that system. More on this in part III, my last blogpost, in the section about Interface Tailoring.
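
As an illustration, a sketch of how such an interface-level restriction could look (assuming a plain string product id in the CDM; the type name is made up). The restriction only lives in the service specific XSD, while the CDM keeps the unrestricted string:

<!-- in the interface (service specific) XSD only -->
<simpleType name="tErpProductId">
  <restriction base="string">
    <maxLength value="8"/>
  </restriction>
</simpleType>

<!-- in the CDM the product id remains a plain, unrestricted string -->
<element name="Id" type="string" minOccurs="0" maxOccurs="1"/>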

String type for id elements
Actually this one is the same as the one above about restrictions. I want to discuss it separately, because of its importance and because it often goes wrong. Defining an id element as a numeric type is a way of applying a numeric restriction to a string type id.
The advice is to make all identifying elements (id, code, etc.) of type string and never a numeric type! Even when they always get a numeric value… for now! The integration layer may in the future connect to another system that uses non-numeric values for an id element, or an existing system may be replaced by a system that uses non-numeric id’s. Only make those elements numeric which truly contain numbers, so the value has a numeric meaning. You can check this by asking yourself whether it functionally makes sense to calculate with the value or not. So for example phone numbers should be strings. Also, when there is a check (algorithm) based on the sequence of the digits to determine whether a number is valid or not (e.g. a bank account check digit), this means the number serves as an identification and thus should be a string type element! Another way to detect numbers which are used as identification is to determine whether it matters when you add a preceding zero to the value. If that does matter, it means the value is not used numerically. After all, preceding zeros don’t change a numeric value.

Determine null usage
The usage of the null value in XML (xsi:nil=”true”) always leads to lots of discussions. The most important advice is to explicitly define standards & rules and communicate them! Decide whether null usage is allowed or not. If so, determine in what situations it is allowed and what it functionally means. Ask yourself how it is used and how it differs from an element being absent (optional elements).
For example, I’ve been in a project where a lot of data was updated in the database. An element being absent meant that a value didn’t change, while a null value meant that, for a container element, its record had to be deleted and, for a ‘value’ element, that the database value had to be set to null.
The most important advice in this is: make up your mind, decide, document and communicate it!
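
A small sketch of the convention used in that project (the entity and element names are made up for illustration; note that an element must be declared nillable in the XSD for xsi:nil to be valid):

<!-- absent element: the value does not change
     xsi:nil="true": set the database value to null (for a container element: delete its record) -->
<Customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Name>John Doe</Name>               <!-- Name is updated -->
  <PhoneNumber xsi:nil="true"/>       <!-- PhoneNumber is set to null -->
  <!-- Address is absent: the address record is left untouched -->
</Customer>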

To summarize this first part of naming conventions and guidelines:

  • Keep in mind that a CDM keeps on changing, so it’s never finished
  • Define naming and structure standards upfront
  • and communicate your standards and guidelines!

When creating a CDM in the XML format, you also have to think about namespaces and how to design the XML. That is what the second part, in my next blogpost, is all about. When you are not defining a CDM in the XML format, you can skip that one and immediately go to the third and last blogpost about dependency management & interface tailoring.

The post Development and Runtime Experiences with a Canonical Data Model Part I: Standards & Guidelines appeared first on AMIS Oracle and Java Blog.

Development and Runtime Experiences with a Canonical Data Model Part II: XML Namespace Standards

Amis Blog - Wed, 2017-03-29 12:21

This blog is about XML namespace standards, primarily for using them in a Canonical Data Model (CDM), but it is also interesting for anyone who has to define XML data by creating XML Schema files (XSD). This blogpost is the second part of a trilogy about my experiences in using and developing a CDM. The first blogpost is about naming & structure standards and the third blogpost is about dependency management & interface tailoring.

XML Namespace Standards

A very important part of an XML model is its namespace. With a namespace you can bind an XML model to a specific domain; it can represent a company, a business domain, a system, a service or even a single component or layer within a service. For a CDM this means that choices have to be made: use one namespace or more, how to deal with newer versions of the CDM, etc.

Two approaches: one generic namespace vs component specific namespaces
Basically I’ve come across two approaches of defining a namespace in a CDM. Both can be a good approach, but you have to choose one based on your specific project characteristics.

  1. The first approach is to use one generic, fixed namespace for the entire CDM. This may also be the ’empty’ namespace, which looks like there is no namespace at all. This approach of one generic fixed namespace is useful when you have a central CDM that is available at run time and all services refer to this central CDM. When you go for this approach, go for one namespace only, so do not use different namespaces within the CDM.
    For maintenance and to keep the CDM manageable, it can be useful to split the CDM up into more definition files (XSD’s), each one representing a different group (domain) of entities. However, my advice is to still use the same namespace in all of these definition files. The reason is that in time the CDM will change and you may want to move entities from one group to another group, or you may want to split up a group. If each group had its own namespace, you would get a problem with backward compatibility, because an element which moves from one group to another would then have changed its namespace.
    When at a certain moment you’re going to have a huge amount of changes which also impact the running software, you can create a new version of the CDM. Examples of such situations are connecting a new external system or replacing an important system by another system. In case you have more versions of the CDM, each version must have its own namespace, where the version number is part of the name of the namespace (see the sketch after this list). New functionality can now be developed with the new version of the CDM. When it uses existing functionality (e.g. calling an existing service), it has to transform the data from the new version of the CDM to the old version (and vice versa).

  2. The second approach is that each software component (e.g. a SOAP web service) has its own specific namespace. This specific namespace is used as the namespace for a copy of the CDM, which is used by the software component. You can consider it as its own copy of the CDM. A central runtime CDM is not needed any more. This means that the software components have no runtime dependencies on the CDM! The result is that the software components can be deployed and run independently of the current version of the CDM. This is the most important advantage!
    The way to achieve this is to have a central CDM without a namespace (or with a dummy namespace like ‘xxx’), which is only available as an off-line library at design time. So there is not even a run time CDM to reference!
    Developers need to create a hard coded copy of the CDM for the software component they are building and apply a namespace to that copy. The name of this namespace is specific for that software component and typically includes the name (and version) of the software component itself. Because the software component is the ‘owner’ of this copy, the parts (entities) of the CDM which are not used by the software component can be removed from this copy.
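
To illustrate the versioned namespaces mentioned in the first approach, a small sketch (the namespace URIs are made up):

<!-- CDM version 1 -->
<schema targetNamespace="http://example.com/cdm/v1"
        xmlns="http://www.w3.org/2001/XMLSchema" version="1.0">
   ...
</schema>

<!-- CDM version 2: its own namespace, so both versions can coexist at run time -->
<schema targetNamespace="http://example.com/cdm/v2"
        xmlns="http://www.w3.org/2001/XMLSchema" version="2.0">
   ...
</schema>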

In part III in my last blogpost about run time dependencies and interface tailoring I will advise when to use the first and when to use the second approach. First some words about XML patterns and their usage in these two namespace approaches.

XML Patterns
XML patterns are design patterns applicable to the design of XML. Because the design of XML is defined by XML Schema (XSD) files, these XML patterns actually are XML Schema (XSD) patterns. These design patterns describe a specific way of modeling XML. Different ways of modeling can result in the same XML, but may differ in terms of maintenance, flexibility, ease of extension, etc.
As far as I know, there are four XML patterns: “Russian Doll”, “Salami Slice”, “Venetian Blind” and “Garden of Eden”. I’m not going to describe these patterns, because that has already been done by others. For a good description of the first three, see http://www.xfront.com/GlobalVersusLocal.html, and http://www.oracle.com/technetwork/java/design-patterns-142138.html gives a brief summary of all four. I advise you to read and understand them when you want to set up an XML type CDM.

I’ve described two approaches of using a CDM above, a central run-time referenced CDM and a design time only CDM. So the question is, which XML design pattern matches best for each approach?

When you’re going for the first approach, a central run-time-referenced CDM, no translations are necessary when passing (a part of) an XML payload from one service to another service. This is easier compared with the second approach, where each service has a different namespace. Because there are no translations necessary and the services need to reference parts of entities as well as entire entity elements, it’s advisable to use the “Salami Slice” or the “Garden of Eden” pattern. They both have all elements defined globally, so it’s easy to reuse them. With the “Garden of Eden” pattern, types are defined globally as well and are thus reusable, providing more flexibility and freedom to designers and developers. The downside is that you end up with a very scattered and verbose CDM.
To solve this disadvantage, you can go for the “Venetian Blind” pattern, set the schema attribute “elementFormDefault” to “unqualified” and do not include any element definitions in the root of the schemas (XSD’s) which make up the CDM. This means there are only XML type definitions in the root of the schema(s), so the CDM is defined by types. The software components, e.g. a web service, do have their own namespace. In this way the software components define a namespace (through their XSD or WSDL) for the root element of the payload (in the SOAP body), while all the child elements below this root remain ‘namespace-less’.
This makes the life of a developer easier, as there is no namespace and thus no prefixes are needed in the payload messages. Not having to deal with namespaces in all the transformation, validation and processing software that works with those messages makes programming code (e.g. XSLT) less complicated, and thus less error prone.
This leads to my advice that:

The “Venetian Blind” pattern, with the schema attribute “elementFormDefault” set to “unqualified” and no elements in the root of the schemas, is the best XML pattern for the approach of using a central run-time referenced CDM.
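
To make this concrete, a compact sketch (the namespaces, entity and service names are made up): the CDM schema only contains type definitions with unqualified local elements, while a service schema defines the qualified root element of its payload and reuses the CDM types.

<!-- CDM schema: types only, no root elements, local elements unqualified -->
<schema targetNamespace="http://example.com/cdm"
        xmlns="http://www.w3.org/2001/XMLSchema"
        elementFormDefault="unqualified">
   <complexType name="tPerson">
      <sequence>
         <element name="Name" type="string" minOccurs="0" maxOccurs="1"/>
      </sequence>
   </complexType>
</schema>

<!-- Service schema: only the root element of the payload gets the service namespace -->
<schema targetNamespace="http://example.com/PersonService"
        xmlns="http://www.w3.org/2001/XMLSchema"
        xmlns:cdm="http://example.com/cdm">
   <import namespace="http://example.com/cdm" schemaLocation="CDM.xsd"/>
   <element name="getPersonResponse" type="cdm:tPerson"/>
</schema>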

When you’re going for the second option, no runtime CDM but only a design time CDM, you shouldn’t use a model which results in payloads (or parts of payloads) of different services having exactly the same namespace. So you cannot use the “Venetian Blind” pattern with “elementFormDefault” set to “unqualified”, which I have just explained. You can still use the “Salami Slice” or “Garden of Eden” pattern, but the disadvantages of a large, scattered and verbose CDM remain.
The reason that you cannot have the same namespace for the payload of services with this approach is that the services have their own copy (‘version’) of the CDM. When (parts of) payloads of different services have the same element with the same namespace (or the empty namespace), the XML structure of both is considered to be exactly equal, while that need not be the case! When they are not the same, you have a problem when services need to call each other and payloads are passed to each other. They can already be different at design time, and then it’s quite obvious.
Much more dangerous is that they can even become different later in time without being noticed! To explain this, assume that at a certain time two software components were developed; they used the same CDM version, so the XML structure was the same. But what if one of them changes later in time and these changes are considered backwards compatible (resulting in a new minor version)? The design time CDM has changed, so the newer version of this service uses this newer CDM version. The other service did not change and now receives a payload from the changed service with elements of a newer version of the CDM. Hopefully this unchanged service can handle this new CDM format correctly, but it might not! Another problem is that it might break its own contract (WSDL) when this service copies the new CDM entities (or parts of them) to its response to the caller. Thus it breaks its own contract while the service itself has not changed! Keep in mind its WSDL still uses the old CDM definitions of the entities in the payload.
Graphically explained:
Breach of Service Contract
Service B calls Service A and retrieves (a part of) the payload entity X from Service A. Service B uses this entity and returns it to its consumers as (part of) its payload. This is all nice and correct according to its service contract (WSDL).
Later in time, Service A is updated to version 1.1 and the newer version of the CDM is used in this updated version. In the newer CDM version, entity X has also been updated, to X’. Now this X’ entity is passed from Service A to Service B. Service B returns this new entity X’ to its consumers, while they expect the original X entity. So Service B returns an invalid response and breaks its own contract!
You can imagine what happens when there is a chain of services and probably there are more consumers of Service A. Such an update can spread out through the entire integration layer (SOA environment) like ripples on water!
You don’t want to update all the services in the chains affected by such a little update.
I’m aware a service should not do this. Theoretically a service is fully responsible for always complying with its own contract (WSDL), but this is very difficult to enforce when developing lots of services. When there is a mapping in a service, this is quite clear, but all mappings should be checked. However, an XML entity is often used as a variable (e.g. in BPEL) in some processing code and can be passed to a caller unnoticed.
The only solution is to avoid passing on complete entities (container elements); so, when passing through, all data fields (data elements) have to be mapped individually (in a so-called transformation) for all incoming and outgoing data of the service.
The problem is that you cannot enforce software to do this, so this must become a rule, a standard, for software developers.
Everyone who has been in software development for some years knows this is not going to work. There will always be a software developer (now, or maybe in the future during maintenance) who does not know or understand this standard.
The best way to prevent this problem is to give each service its own namespace, so entities (container elements) cannot be copied and passed through in their entirety, and thus developers have to map the data elements individually.

This is why I advise, for the approach of a design time only CDM, to also use the “Venetian Blind” pattern, but now with the schema attribute “elementFormDefault” set to “qualified”. This results in a CDM of which

  • it is easy to copy the elements that are needed, including child elements and necessary types, from the design time CDM to the runtime constituents of the software component being developed. Do not forget to apply the component specific target namespace to this copy.
  • it is possible to reuse type definitions within the CDM itself, preventing multiple definitions of the same entity.

In my next blogpost, part III about runtime dependencies and interface tailoring, I explain why you should go in most cases for a design time CDM and not a central runtime CDM.

The post Development and Runtime Experiences with a Canonical Data Model Part II: XML Namespace Standards appeared first on AMIS Oracle and Java Blog.

Development and Runtime Experiences with a Canonical Data Model Part III: Dependency Management & Interface Tailoring

Amis Blog - Wed, 2017-03-29 12:20
Introduction

This blogpost is part III, the last part of a trilogy on how to create and use a Canonical Data Model (CDM). The first blogpost contains part I, in which I share my experiences in developing a CDM and provide you with lots of standards and guidelines for creating a CDM. The second part is all about XML Namespace Standards. This part is about the usage of a CDM in the integration layer, thus how to use it in a run time environment and what the consequences are for the development of the services which are part of the integration layer.

Dependency Management & Interface Tailoring

When you’ve decided to use a CDM, it’s quite tempting to put the XSD files that make up the CDM in a central place in the run time environment that all the services can reference. In this way there is only one model, one ‘truth’, for all the services. However, there are a few problems you run into quite fast when using such a central run time CDM.

Dependency Management

Backwards compatibility
The first challenge is to maintain backwards compatibility. This means that when there is a change in the CDM, this change is implemented in such a way that the CDM supports both the ‘old’ data format, according to the CDM before the change, as well as the new data format with the change. When you’re in the development stage of the project, the CDM will change quite frequently, in large projects even on a daily basis. When these changes are backwards compatible, the services which have already been developed and are considered finished do not need to change (unless of course the change also involves a functional change of a finished service). Otherwise, when these changes are not backwards compatible, all software components, so all services, which have been finished have to be investigated to determine whether they are hit by the change. Since all services use the same set of central XSD definitions, many will be hit by a change in these definitions.
If you’re lucky you have nice unit tests or other code analysis tools you can use to detect this. You may ask yourself whether these tests and/or tools will cover 100% of the hits. When services are hit, they have to be modified, tested and released again. To reduce maintenance and rework on all finished services, there will be pressure to maintain backwards compatibility as much as possible.
Maintaining backwards compatibility in practice means:

  • that all elements that are added to the CDM have to be optional;
  • that you can increase the maximum occurrence of an element, but never reduce it;
  • that you can make mandatory elements optional, but not vice versa;
  • and that structure changes are much more difficult.

Take for example a data element that has to be split up into multiple elements. Let’s take a product id element of type string and split it up into a container element that is able to contain multiple product identifications for the same product. The identification container element will have child elements for product id, product id type and an optional owner id for the ‘owner’ of the identification (e.g. a customer may have his own product identification). One way of applying this change and still maintaining backwards compatibility is by using an XML choice construction:

<complexType name="tProduct">
  <sequence>
    <choice minOccurs="0" maxOccurs="1">
      <element name="Id" type="string" />
      <element name="Identifications">
        <complexType>
          <sequence>
            <element name="Identification" minOccurs="0" maxOccurs="unbounded">
              <complexType>
                <sequence>
                  <element name="Id" type="string" />
                  <element name="IdType" type="string" />
                  <element name="IdOwner" type="string" minOccurs="0"/>
                </sequence>
              </complexType>
            </element>
          </sequence>
        </complexType>
      </element>
    </choice>
    <element name="Name" type="string" />
    ...
  </sequence>
</complexType>

There are other ways to implement this change and remain backwards compatible, but they will all result in a redundant and verbose data model. As you can imagine, this soon results in a very ugly CDM, which is hard to read and understand.

Hidden functional bugs
There is another danger. When keeping backwards compatibility in this way, the services which were finished don’t break technically and still run. But they might break functionally! Such a break is even more dangerous because it may not be visible immediately, and it can take quite a long time before this hidden functional bug is discovered. Perhaps the service already runs in a production environment and executes with unnoticed functional bugs!
Take the example above and consider that there has already been a service developed which does something with orders. Besides order handling, it also sends the product id’s in an order to a CRM system, but only for the product id’s in the range 1000-2000. The check in the service on the product id being in the range 1000-2000 will be based upon the original product id field. But what happens if the CDM is changed as described in the previous paragraph, so the original product id field becomes part of a choice and thus optional? This unchanged service now might handle orders that contain products with the newer data definition for a product, in which the new “Identification” element is used instead of the old “Id” element. If you’re lucky, the check on the range fails with a run time exception! Lucky, because you’re immediately notified of this functional flaw. It probably will be detected quite early in a test environment when it concerns common functionality. But what if it is rare functionality? Then the danger is that it might not be detected and you end up with a run time exception in a production environment. That is not what you want, but at least it is detected!
The real problem is that there is a realistic chance that the check doesn’t throw an exception and doesn’t log an error or warning. It might conclude that the product id is not in the range 1000-2000, because the product id field is not there, while the product identification is in that range! It just uses the new way of modeling the product identification with the new “Identification” element. This results in a service that has a functional bug while it seems to run correctly!
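
A sketch of how such a check might silently go wrong in an XSLT mapping (the element names follow the tProduct example above; the test itself is illustrative):

<!-- The old check only looks at Product/Id. When the payload uses the new
     Identifications structure, Id is absent, the test evaluates to false
     and the product is silently skipped: no exception, no warning. -->
<xsl:if test="Product/Id &gt;= 1000 and Product/Id &lt;= 2000">
  <!-- send the product id to the CRM system -->
</xsl:if>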

Backward compatibility in time
Sometimes you have no choice and you have to make changes which are not backwards compatible. This can cause another problem: you’re not backwards compatible in time. You might be developing newer versions of services. But what if in production there is a problem with one of these new services using the new CDM and you want to go back to a previous version of that service? You have to go back to the old version of the CDM as well, because the old version of the service is not compatible with the new CDM. But that also means that none of the newer services can run, because they depend on the new CDM. So you have to revert to the old versions of all of the new services using the new CDM!

The base cause of these problems is that all software components (services) are dependent on the central run time CDM!
So this central run time CDM introduces dependencies between all (versions of) components. This heavily conflicts with one of the base principles of SOA: loosely coupled, independent services.

 

Interface Tailoring

There is another problem with a central CDM which has more to do with programming concepts, but which also impacts the usage of services, resulting in a slower development cycle. The interface of a service, which is described in its contract (WSDL), should reflect the functionality of the service. However, if you’re using a central CDM, the CDM is used by all the services. This means that the entities in the CDM contain all the data elements which are needed in the contracts of all the services. So basically a CDM entity consists of a ‘merge’ of all these data elements. The result is that the entities will be quite large, detailed and extensive. The services use these CDM entities in their contracts, while functionally only a (small) part of the elements is used in a single service.

This makes the interface of a service very unclear, ambiguous and meaningless.

Another side effect is that it makes no sense to validate (request) messages, because all elements will be optional.

Take for example a simple service that returns the street and city based upon the postal code and house number (this is very common functionality in The Netherlands). The interface would be nice, clear and almost self-describing when the service contract dictates that the input (request) is only a postal code and house number and the output (response) only contains the street name and the city. But with a central CDM, the input will be an entity of type address, as will the output. With some bad luck, the address entity also contains all kinds of elements for foreign addresses, post office boxes, etc. I’ve seen exactly this example in a real project, with an address entity containing more than 30 child elements, while the service only needed four of them: two elements, postal code and house number, as input, and two elements, street and city, as output. You might consider solving this by defining these separate elements as input and output and not using the entity element, but that’s not the idea of a central CDM! Take notice that this is just a little example. I’ve seen this problem in a project with lawsuit entities. You can imagine how large such an entity can become, with hundreds of elements. Services individually only used some of the elements of the lawsuit entity, but these elements were scattered across the entire entity, so it does not help either to split up the type definition of a lawsuit entity into several sub types. In that project almost all the services needed one or more lawsuit entities, resulting in interface contracts (WSDL) which were all very generic and didn’t make sense. You needed the (up to date) documentation of the service in order to know which elements you had to use in the input and which elements were returned as output, because the definitions of the request and response messages were not useful, as they contained complete entities.

Solution

The solution to both of the problems described above is not to use a central run time CDM, but only a design time CDM.
This design time CDM has no namespace (or a dummy one). When a service is developed, a hard copy is made of (a part of) the CDM at that moment, to a (source) location specific for that service. Then a service specific namespace has to be applied to this local copy of the (service specific) CDM.
And now you can shape this local copy of the CDM to your needs! Tailor it by removing elements that the service contract (WSDL) doesn’t need. You can also apply more restrictions to the remaining elements by making optional elements mandatory, reducing the maximum occurrences of an element and even creating data value restrictions for an element (e.g. setting a maximum string length). By doing this, you can tailor the interface in such a way that it reflects the functionality of the service!
You can even have two different versions of an entity in this copy of the CDM, for example one to use in the input message and one in the output message.
Let’s demonstrate this with the example above: an address with only postal code and house number for the input message and an address with street and city for the output message. The design time CDM contains the full address entity, while the local and tailored copy of the service CDM contains two tailored address entities. These can be used by the service XSD, which contains the message definitions of the request and response payloads:

CDM XSD and Service XSD

The source code of these schemas is shown below:

<schema targetNamespace="DUMMY_NAMESPACE"
            xmlns="http://www.w3.org/2001/XMLSchema" 
            version="1.0">

   <complexType name="TAddress">
      <sequence>
         <element name="Department" type="string" minOccurs="0"/>
         <element name="Street" type="string" minOccurs="0"/>
         <element name="Number" type="string" minOccurs="0"/>
         <element name="PostalCode" type="string" minOccurs="0"/>
         <element name="City" type="string" minOccurs="0"/>
         <element name="County" type="string" minOccurs="0"/>
         <element name="State" type="string" minOccurs="0"/>
         <element name="Country" type="string" minOccurs="0"/>
      </sequence>
   </complexType>
   
</schema>
<schema targetNamespace="http://nl.amis.AddressServiceCDM"
            xmlns="http://www.w3.org/2001/XMLSchema" 
            version="1.0">

   <complexType name="TAddressInput">
      <sequence>
         <element name="Number" type="string" minOccurs="0"/>
         <element name="PostalCode" type="string" minOccurs="1"/>
      </sequence>
   </complexType>

   <complexType name="TAddressOutput">
      <sequence>
         <element name="Street" type="string" minOccurs="1"/>
         <element name="City" type="string" minOccurs="1"/>
      </sequence>
   </complexType>
   
</schema>
<schema targetNamespace="http://nl.amis.AddressService"
        xmlns="http://www.w3.org/2001/XMLSchema" 
        xmlns:cdm="http://nl.amis.AddressServiceCDM" 
        version="1.0">

   <import namespace="http://nl.amis.AddressServiceCDM" schemaLocation="AddressServiceCDM.xsd"/>

   <element name="getAddressRequest">
	   <complexType>
		  <sequence>
			 <element name="Address" type="cdm:TAddressInput" minOccurs="1"/>
		  </sequence>
	   </complexType>
   </element>

   <element name="getAddressResponse">
	   <complexType>
		  <sequence>
			 <element name="Address" type="cdm:TAddressOutput" minOccurs="1"/>
		  </sequence>
	   </complexType>
   </element>
   
</schema>

When you’re finished tailoring, you can still deploy these service interfaces (WSDL) containing the shaped data definitions (XSDs) to a central run time location. However, each service must have its own location within this central run time location to store its tailored data definitions (XSDs). When you do this, you can store the service interface (abstract WSDL) in there as well. In this way there is only one copy of a service interface, which is used by the implementing service as well as by consuming services.
I’ve worked in a project with SOAP services where the conventions dictated that the filename of a WSDL is the same as the name of the service. The message payloads were not defined in this WSDL, but were included from an external XSD file. This XSD also had the same filename as the service name. This service XSD defined the payload of the messages, but it did not contain CDM entities or CDM type definitions. They were included from another XSD with the fixed name CDM.xsd. This local, service specific, CDM.xsd contained the tailored (stripped and restricted) copy of the central design time CDM, but had the same target namespace as the service.wsdl and the service.xsd:
Service Files
This approach also gave the opportunity to add operation specific elements to the message definitions in the service.xsd. These operation specific elements were not part of the central CDM and did not belong there due to their nature (operation specific). They were rarely needed, but when they were, they did not pollute the CDM, because they did not have to be added to the CDM. Think of switches and options on operations which act on functionality, e.g. a boolean type element “includeProductDescription” in the request message for operation “getOrder”.

Note: The services in the project all did use some generic XML elements of which the definition (XSD) was stored in a central run time location. However, these data definitions are technical data fields and therefore are not part of the CDM. For example header fields that are used for security, a generic response entity containing messages (error, warning, info) and optional paging information elements in case a response contains a collection. You need a central type definition when you are using generic functionality (e.g. from a software library) in all services and consuming software.

Conclusion
With this approach of a design time CDM and tailored interfaces:

  • There are no run time dependencies on the CDM and thus no dependencies between (versions of) services
  • Contract breach and hidden functional bugs are prevented (because of the different namespaces, a service has to copy each data element individually when passing an entity, or part of an entity, to its output)
  • Service interfaces reflect the service functionality
  • Method specific parameters can be added without polluting the CDM
  • And – most important – the CDM can change without limitations and as often as you want to!

The result is that the CDM in time will grow into a nice, clean and mature model that reflects the business data model of the organization – while not impeding, and even promoting, the agility of service development. And that is exactly what you want with a CDM!

 

When to use a central run time CDM

A final remark about a central run time CDM: there are situations where this can be a good solution, namely for smaller integration projects and in the case where all the systems and data sources which are to be connected to the integration layer are already in place, so they are not being developed. They have probably already been running in production for a while.
This means that the data and the data format which have to be passed through the integration layer and are used in the services are already fixed. You could state that the CDM is already there, although it still has to be described and documented in a data model. It’s likely that it’s also a project with a ‘one go’ to production, instead of frequent delivery cycles.
When after a while one system is replaced by another system, or the integration layer is extended by connecting one or more systems, and this results in the CDM having to be changed, you can add versioning to the CDM. Create a copy of the existing CDM, give it a new version (e.g. with a version number in the namespace) and make the changes in the CDM which are needed. This is also a good opportunity to clean up the CDM by removing unwanted legacy that was only kept for backwards compatibility. Use this newer version of the CDM for all new development and maintenance of services.
Again, only use this central run time CDM for smaller projects and when there is a ‘one go’ to production (e.g. the replacement of one system). As soon as the project becomes larger and/or the integration of systems keeps on going, switch over to the design time CDM approach.
You can easily switch over by starting to develop the new services with the design time CDM approach and keeping the ‘old’ services running with the central run time CDM. As soon as there is a change in an ‘old’ service, refactor it to the new approach of the design time CDM. In time there will be no more services using the run time CDM, so the run time CDM can be removed.

After reading this blog post, together with the previous two blog posts which make up the trilogy about my experiences with a Canonical Data Model, you should have a good understanding of how to set up a CDM and use it in your projects. Hopefully it helps you in making valuable decisions about creating and using a CDM, and your projects will benefit from it.

The post Development and Runtime Experiences with a Canonical Data Model Part III: Dependency Management & Interface Tailoring appeared first on AMIS Oracle and Java Blog.

Webcast: "12.2 Technical Upgrade Overview and Process Flow"

Steven Chan - Wed, 2017-03-29 10:08

Oracle University has a wealth of free webcasts for Oracle E-Business Suite. If you're looking for an overview of how to optimize your EBS 12.2 installation, see:

Udayan Parvate, Senior Director Release Engineering, Quality and Release Management, shares a high level overview of the 12.2 technical upgrade and the sequence of technical steps to follow in the 12.2 upgrade process. This material was presented at Oracle OpenWorld 2015.

Categories: APPS Blogs

Webcast: "12.2 Technical Upgrade Overview and Process Flow"

Steven Chan - Wed, 2017-03-29 10:08

EBS 12.2 upgrade webcastOracle
University has a wealth of free webcasts for Oracle E-Business Suite. 
If you're looking for an overview of how to optimize your EBS 12.2 installation, see:

Udayan Parvate, Senior Director Release Engineering, Quality and Release Management, shares a high level overview of the 12.2 technical upgrade and the sequence of technical steps to follow in the 12.2 upgrade process. This material was presented at Oracle OpenWorld 2015.

 

Categories: APPS Blogs

Oracle Unveils Industry-First Cloud Converged Storage to Help Organizations Bridge On-Premises and Oracle Cloud Storage

Oracle Press Releases - Wed, 2017-03-29 10:00
Press Release
Oracle Unveils Industry-First Cloud Converged Storage to Help Organizations Bridge On-Premises and Oracle Cloud Storage Organizations Can Seamlessly Merge Oracle Storage Cloud Functionality and Economics with the Power of High Performance Oracle ZFS Storage Appliances

Redwood Shores, Calif.—Mar 29, 2017

Oracle today unveiled the industry’s first Cloud Converged Storage, representing the first time a public cloud provider at scale has integrated its cloud services with its on-premises, high performance NAS storage systems. Oracle ZFS Cloud software, included in the latest Oracle ZFS Storage Appliance release, enables organizations to easily and seamlessly move data and/or applications to the cloud to optimize value and savings, while eliminating the need for external cloud gateways and avoiding the costs of software licenses and cloud access licenses—AKA “cloud entrance taxes”—charged by legacy on-premises vendors for the right to access the public cloud from their infrastructure platforms. As an example, Oracle’s total cost of ownership versus one industry competitor was 87 percent less.*

Oracle’s approach removes the burden on users to do their own on-premises to public cloud integration, manage environments comprised of different security requirements, support teams, industry standards, and skill sets, as well as the struggle with end-to-end visibility, diagnostics and support. Oracle is, in fact, the only company that can bring the two worlds together as one co-engineered solution. On-premises NAS storage providers cannot offer this level of convergence and economic benefits as they lack a public cloud, and public cloud providers lack on-premises high-performance NAS storage systems.

“With its ZFS Cloud, Oracle simultaneously challenges not only public cloud providers that cannot deliver on-premises, high-performance storage systems, but also traditional hardware vendors that lack truly integrated public clouds,” said Mark Peters, Practice Director & Senior Analyst, Enterprise Strategy Group. “Oracle is delivering business value with a genuine hybrid data ability with a ‘cloud insurance option’ built right into the storage system, significantly streamlining users’ experiences.”

“Cloud is forcing IT practitioners to rethink their organization’s infrastructure to accommodate current technology while future-proofing their business for tomorrow,” said Steve Zivanic, Vice President, Storage, Converged Infrastructure, Oracle. “By converging the Oracle ZFS Storage Appliances with Oracle Storage Cloud, organizations benefit from the highest performing storage systems for their on-premises needs, while seamlessly extending them to Oracle Cloud resources when necessary. Oracle ZFS Cloud is the unifying enabler that helps customers bridge the gap between their current infrastructure and plans for broader public cloud adoption.”

The convergence of the company’s Oracle Storage Cloud with its high-performance Oracle ZFS Storage Appliances—the storage foundation for Oracle Public Cloud and IT with over 1 exabyte installed—empowers users with the performance of flash and the agility, simplicity and elastic scaling of the Oracle Storage Cloud. Oracle customers can use Cloud Converged Storage for elastic application storage, back-up and recovery, development, testing, active archive storage, snapshot replica storage, Dev Ops with a single API for both on-premises and in the Oracle Storage Cloud, and lift-and-shift workload migration. Modern applications can leverage data both in on-premises high performance Oracle ZFS Storage Appliances and in the Oracle Storage Cloud without any application changes.

The latest update also includes a series of Oracle ZFS Storage Appliance innovations that extend Oracle Database dynamic automation capabilities and increase Database Administrator productivity by 10X as well as add all-flash pools to accelerate critical business applications. Enhancements include the following:

  • Oracle Intelligent Storage Protocol 2.0 Delivers Next Generation Automation for Oracle Database and Oracle ZFS Storage Appliance: brings new capabilities that increase Oracle Database performance, decrease manual storage tuning through automation, and simplify database storage optimization. Developed collaboratively between the Oracle Database and Oracle ZFS Storage Appliance engineering teams, it prioritizes IOs based on database hints, effectively increases online transaction processing per minute by up to 19-percent, Oracle Database RMAN backup performance by up to 33 percent, latency-sensitive control file IO performance by up to 13x, and log writer IO performance by up to 3.9x—all without database or storage administrator intervention.
  • All-Flash Storage Pools Improve Application Performance: flash storage pools that boost performance for any application are now available and offer seamless scalability from 16TB to 2.4PB of capacity and over 34GB/second of throughput, regardless of access protocol. These systems accelerate database performance, deliver low latency for critical applications while reducing energy resources and needed datacenter space.
  • Cloud-Scale Data Protection: Oracle ZFS Storage Appliance delivers market-leading data backup, with over 62TB per hour, and data restore, with over 60TB per hour, performance over Infiniband, Ethernet, and Fibre Channel. This latest software release extends Oracle’s leadership in data protection with innovations in data reduction and data mobility. New data reduction technology reduces the backup storage footprint by up to 9x and bandwidth demands by up to 4x with advancements in deduplication and compression. In addition, enhanced data mobility technology increases the amount of data that can be securely distributed to multiple locations by up to 2x with advancements in replication, intelligent data reduction, and remote data distribution security.

Oracle ZFS Storage Appliance is trusted by the world’s leading financial services companies, telcos, semiconductor companies, oil & gas and media & entertainment companies for its extreme performance and massive levels of sustained bandwidth for data-intensive applications in the cloud and on-premises.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data.  The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

*A 5-year TCO comparison of one petabyte of on-premises capacity and two petabytes of cloud public storage is 87% less with ZFS Backup Appliance and Oracle Storage Cloud versus Dell EMC Data Domain and its cloud licensing scheme.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Kristin Reeves

  • +1.415.856.5145

Blue Shield of California, Family Health Network, and Health Care Service Corporation

Oracle Press Releases - Wed, 2017-03-29 10:00
Press Release
Blue Shield of California, Family Health Network, and Health Care Service Corporation Complete cloud solutions help healthcare insurers deliver affordable healthcare through improved financial functions

Redwood Shores, Calif.—Mar 29, 2017

Oracle today announced that Blue Shield of California, Family Health Network (FHN), and Health Care Service Corporation (HCSC) have selected Oracle Cloud Applications to help them provide healthcare coverage at an affordable cost by digitizing and modernizing their financial, planning and budgeting systems, and business processes. With Oracle Cloud solutions for finance, these organizations have the tools needed to be able to achieve the speed and flexibility required to support and scale with the healthcare industry’s rapidly changing needs.
 
Oracle Enterprise Resource Planning (ERP) Cloud helps organizations simplify and streamline operations with increased visibility and insights into financials and operations. By reducing IT complexity and costs, organizations can also increase productivity by freeing employees to help provide their insured members with the best and most affordable health insurance plans.
 
Health Insurance Organizations Move to the Cloud
 
Blue Shield of California chose Oracle ERP Cloud to support more than four million health plan members and nearly 65,000 physicians. The move to Oracle ERP Cloud has allowed Blue Shield of California to streamline hundreds of application interfaces and reports, which has stabilized the monthly close. With Oracle ERP Cloud, Blue Shield of California expects improved financial analysis to enable cost savings for policy holders.
 
“When we looked at transforming our technology, we observed our competitors moving to the cloud, and for good reason,” said Michael Sheils, VP, corporate shared services fin.  “After a careful evaluation, we selected Oracle as our partner because of their commitment to innovation in the cloud, their expertise in the health payer space, and the breadth and depth of the ERP Cloud offering.”
 
Improved financial efficiency was the primary driver for Family Health Network to select Oracle ERP Cloud. A not-for-profit, provider-sponsored health plan in the Chicago area, FHN is one of the largest managed care plans in the region under the Medicaid Family Health Plan. In evaluating a new financial system, FHN decided a move to the cloud was the best approach to streamline operations and upgrades.
 
“In insurance, clear finance reporting is critical in order to process claims as quickly as possible,” said Nirav Shah, vice president of finance at Family Health Network. “Oracle ERP Cloud is a valuable tool for our company, streamlining our financial operations, while also maximizing business performance and growth.”
 
As changes in the financial reporting structure and lengthening close times began to impact HCSC, they recognized that cloud technology could help them to remain agile and quickly react to future changes. The largest customer-owned health insurer in the United States, HCSC selected Oracle ERP Cloud to help transform its financial systems and support its 22,000 employees serving more than 15 million members across five states, with the functionality, breadth, and depth of the Oracle Cloud.
 
“We evaluated vendors based on several drivers, including strategic fit, capability, maturity, vendor commitment and support, employee feedback, and implementation considerations,” said James Kadela, SVP controller at HCSC. “Oracle ERP Cloud was the natural choice based on the nature of HCSC’s financial systems. In addition to its current capabilities, Oracle is heavily investing in research and development and we feel confident that Oracle ERP Cloud will evolve with us to support our long-term business strategy.”
 
“Oracle is committed to supporting the healthcare industry with our complete, secure, and modern set of enterprise-grade cloud applications that help healthcare payers provide affordable health insurance to their communities,” said Terrance Wampler, vice president of financials applications strategy at Oracle. “Our connected, best of breed ERP Cloud solutions are uniquely placed to cater to dynamic business environments and enable our customers to modernize core financial operations, empower its people, and support growth.”
 
Oracle Cloud delivers the industry’s broadest suite of enterprise-grade cloud services, including software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), and data as a service (DaaS).
 
Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

OUD – Oracle Unified Directory 11.1.2.3 Backups Tips and Tricks

Yann Neuhaus - Wed, 2017-03-29 09:51

Backing up an OUD consists of several components:

  • The OUD software itself
  • The OUD back end data
  • The OUD configuration
  • The OUD logical export as an LDIF file

However, in this post I would like to take a closer look at the back end data. Unlike the old OID, the OUD directory server uses the Berkeley DB Java Edition (JE) as its primary back end. The OUD backup command allows you to back up all back ends in one shot or a single back end; you can do full or incremental backups, compress them, and even encrypt your back end data if you like (a short sketch of an incremental and an encrypted backup follows further below).

One of the first questions that comes up is where to put the backup files. In a replicated environment, it makes a lot of sense to put them on an NFS share. If you lose one OUD host, you still have access to the backups on the other host.

I chose to back up the back end data to /u99/backup/OUD, which is a directory on an NFSv4 mount.

[dbafmw@dbidg01 OUD]$ mount | grep u99
dbidg03:/u99 on /u99 type nfs4 (rw,relatime,vers=4.1,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.201,local_lock=none,addr=192.168.56.203)

Before we issue the first backup command, it is good to know which back ends we have. Some back ends change quite frequently and others might not. I am using the OUD only for TNS name resolution, so the most important one for me is OracleContext0.

[dbafmw@dbidg01 ~]$ list-backends
Backend ID        : Base DN
------------------:----------------------------------------
Eus0              :
EusContext0       :
Fa0               :
OIDCompatibility  : cn=OracleContext,cn=OracleSchemaVersion
OracleContext0    : "cn=OracleContext,dc=dbi,dc=com"
adminRoot         : cn=admin data
ads-truststore    : cn=ads-truststore
backup            : cn=backups
monitor           : cn=monitor
schema            : cn=schema
subschemasubentry :
tasks             : cn=tasks
userRoot          : "dc=dbi,dc=com"
virtualAcis       : cn=virtual acis


[dbafmw@dbidg01 ~]$ list-backends -n OracleContext0
Backend ID     : Base DN
---------------:---------------------------------
OracleContext0 : "cn=OracleContext,dc=dbi,dc=com"

OK. Let's start a full backup of all back ends to the backup directory /u99/backup/OUD and compress them.

[dbafmw@dbidg01 ~]$ backup --backUpAll --compress --backupDirectory=/u99/backup/OUD
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend virtualAcis
[29/Mar/2017:08:55:49 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend OracleContext0
[29/Mar/2017:08:55:49 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend tasks
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend schema
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend OIDCompatibility
[29/Mar/2017:08:55:49 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend userRoot
[29/Mar/2017:08:55:49 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend replicationChanges
[29/Mar/2017:08:55:49 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:55:49 +0200] category=TOOLS severity=NOTICE msgID=10944795 msg=The backup process completed successfully

For backing up your OUD server back ends, the OUD itself does not have to be up and running. You can back it up while it is offline too.

[dbafmw@dbidg01 ~]$ stop-ds
Stopping Server...

[29/Mar/2017:08:57:46 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=OIDCompatibility,cn=Workflow Elements,cn=config is now taken offline
[29/Mar/2017:08:57:46 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=OracleContext0,cn=Workflow elements,cn=config is now taken offline
[29/Mar/2017:08:57:46 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=userRoot,cn=Workflow Elements,cn=config is now taken offline
[29/Mar/2017:08:57:46 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=virtualAcis,cn=Workflow Elements,cn=config is now taken offline
[29/Mar/2017:08:57:46 +0200] category=CORE severity=NOTICE msgID=458955 msg=The Directory Server is now stopped


[dbafmw@dbidg01 ~]$ backup --backUpAll --compress --backupDirectory=/u99/backup/OUD
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend virtualAcis
[29/Mar/2017:08:58:06 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend OracleContext0
[29/Mar/2017:08:58:06 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend tasks
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend schema
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend OIDCompatibility
[29/Mar/2017:08:58:06 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend userRoot
[29/Mar/2017:08:58:06 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend replicationChanges
[29/Mar/2017:08:58:06 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:08:58:06 +0200] category=TOOLS severity=NOTICE msgID=10944795 msg=The backup process completed successfully
[dbafmw@dbidg01 ~]$

Backing up a single back end is done by the following command.

[dbafmw@dbidg01 ~]$ backup --backendID OracleContext0 --compress --backupDirectory=/u99/backup/OUD
[29/Mar/2017:15:14:22 +0200] category=TOOLS severity=NOTICE msgID=10944792 msg=Starting backup for backend OracleContext0
[29/Mar/2017:15:14:22 +0200] category=JEB severity=NOTICE msgID=8847446 msg=Archived: 00000000.jdb
[29/Mar/2017:15:14:22 +0200] category=TOOLS severity=NOTICE msgID=10944795 msg=The backup process completed successfully
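
Besides full and single back end backups, the same backup command also accepts flags for the incremental and encrypted backups mentioned at the beginning of this post. The following is only a sketch of what that could look like; verify the exact options against backup --help of your OUD 11.1.2.3 installation, and note that encryption additionally requires the server's crypto manager to be set up.

# Sketch only: incremental and encrypted backups of the OracleContext0 back end.
# Check "backup --help" on your installation before relying on these flags.

# Incremental backup: only archives the changes since the last backup
# found in the given backup directory.
backup --backendID OracleContext0 --incremental --compress \
       --backupDirectory=/u99/backup/OUD

# Encrypted backup: assumes the server's crypto manager is configured.
backup --backendID OracleContext0 --encrypt \
       --backupDirectory=/u99/backup/OUD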

The backups I have done are now reflected in the backup directory.

[dbafmw@dbidg01 OUD]$ ls -rtl /u99/backup/OUD/backup-OracleContext0*
-rw-r--r-- 1 dbafmw oinstall 19193 Mar 28 15:11 /u99/backup/OUD/backup-OracleContext0-20170328131137Z
-rw-r--r-- 1 dbafmw oinstall 56904 Mar 28 15:20 /u99/backup/OUD/backup-OracleContext0-20170328132004Z
-rw-r--r-- 1 dbafmw oinstall 27357 Mar 29 15:14 /u99/backup/OUD/backup-OracleContext0-20170329131419Z
-rw-r--r-- 1 dbafmw oinstall 27357 Mar 29 15:15 /u99/backup/OUD/backup-OracleContext0-20170329131552Z
-rw-r--r-- 1 dbafmw oinstall 84556 Mar 29 15:16 /u99/backup/OUD/backup-OracleContext0-20170329131622Z

The backups done via the OUD backup command are documented in a file called backup.info. If you grep for the last backup piece, you will find it there as the last entry.

[dbafmw@dbidg01 OUD]$ cat /u99/backup/OUD/backup.info | grep -B 8 backup-OracleContext0-20170329131622Z

backup_id=20170329131622Z
backup_date=20170329131625Z
incremental=false
compressed=false
encrypted=false
property.last_logfile_name=00000000.jdb
property.last_logfile_size=84330
property.archive_file=backup-OracleContext0-20170329131622Z

There is another method to find the last backup. Before the backup command starts, it copies the current backup.info to backup.info.save, so you just need to do a diff and then you know which backup is the latest.

[dbafmw@dbidg01 OUD]$ diff backup.info backup.info.save
48,56d47
< backup_id=20170329131622Z
< backup_date=20170329131625Z
< incremental=false
< compressed=false
< encrypted=false
< property.last_logfile_name=00000000.jdb
< property.last_logfile_size=84330
< property.archive_file=backup-OracleContext0-20170329131622Z
<

But what happens if you don’t need an old backup anymore, e.g. backup-OracleContext0-20170328131137Z?

[dbafmw@dbidg01 OUD]$ cat backup.info | grep backup-OracleContext0-20170328131137Z
property.archive_file=backup-OracleContext0-20170328131137Z

Unfortunately, there is no purge procedure delivered with OUD to clean up old backups. You have to clean them up yourself. E.g. in case you want to clean up OracleContext0 backups older than 2 days, you could do it like this.

[dbafmw@dbidg01 OUD]$ find /u99/backup/OUD -maxdepth 1 -type f -name "backup-OracleContext0*" -mtime +2 | awk -F "/" '{ print $5 }' | awk -F "-" '{ print $3 }'
20170328132004Z

[dbafmw@dbidg01 OUD]$ find /u99/backup/OUD -maxdepth 1 -type f -name "backup-OracleContext0*" -mtime +2 | awk -F "/" '{ print $5 }' | awk -F "-" '{ print $3 }' | while read i
do
echo /u99/backup/OUD/backup-OracleContext0-${i}
rm /u99/backup/OUD/backup-OracleContext0-${i}
sed -i "/backup_id=${i}/,/property.archive_file=backup-OracleContext0-${i}/d" /u99/backup/OUD/backup.info
done
[dbafmw@dbidg01 OUD]$ cat backup.info | grep 20170328132004Z
[dbafmw@dbidg01 OUD]$

This script is of course not foolproof, but you get the idea. ;-)

Conclusion

Oracle OUD delivers quite a lot of good options regarding backups. However, purging the old backups is something you have to handle yourself.

 

 

Cet article OUD – Oracle Unified Directory 11.1.2.3 Backups Tips and Tricks est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 10 – Timezones

Yann Neuhaus - Wed, 2017-03-29 08:59

This post is inspired by a question we received from a customer: in Oracle there is sessiontimezone, which returns the time zone of the session. Asking for the time zone of the session in Oracle returns the offset to UTC:

SQL> select sessiontimezone from dual;

SESSIONTIMEZONE
---------------------------------------------------------------------------
+02:00

This is fine as I am based in Switzerland and we skipped one hour in the night from last Saturday to Sunday :)

How can we do something similar in PostgreSQL? To check the current time zone of your session:

(postgres@[local]:4445) [postgres] > show timezone;
   TimeZone   
--------------
 Europe/Vaduz
(1 row)

Or:

(postgres@[local]:4445) [postgres] > select current_setting('timezone');
 current_setting 
-----------------
 Europe/Vaduz
(1 row)

So, PostgreSQL will not show you the offset to UTC but the name of the time zone as specified by the Internet Assigned Numbers Authority (IANA). When you want to have the offset to UTC you can do something like this:

(postgres@[local]:4445) [postgres] > select age(now(),now() at time zone 'UTC');
   age    
----------
 02:00:00
(1 row)

You can do it using the extract function as well:

(postgres@[local]:4445) [postgres] > select extract( timezone from now() ) / 60 /60;
 ?column? 
----------
        2
(1 row)

How can you change the session time zone? One way is to set the PGTZ environment variable before starting a new session when you use a libpq client:

postgres@centos7:/home/postgres/ [PG3] export PGTZ=Europe/Berlin
postgres@centos7:/home/postgres/ [PG3] psql postgres
psql.bin (9.6.2.7)
Type "help" for help.

(postgres@[local]:4445) [postgres] > show timezone;
   TimeZone    
---------------
 Europe/Berlin
(1 row)

The other way is to directly set it in the session:

Time: 1.048 ms
(postgres@[local]:4445) [postgres] > set time zone 'Europe/Paris';
SET
Time: 82.903 ms
(postgres@[local]:4445) [postgres] > show timezone;
   TimeZone   
--------------
 Europe/Paris
(1 row)

Of course you can also set the timezone parameter in postgresql.conf.
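
As a minimal sketch (the data directory below is just an assumption, use your own $PGDATA), persisting the setting for the whole instance could look like this:

# Sketch only: append the parameter to postgresql.conf and reload the configuration.
# /u02/pgdata/PG3 is an assumed data directory, adjust it to your environment.
echo "timezone = 'Europe/Zurich'" >> /u02/pgdata/PG3/postgresql.conf
pg_ctl -D /u02/pgdata/PG3 reload

# verify from a new session
psql -c "show timezone;" postgres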

To get the current timestamp you can use:

(postgres@[local]:4445) [postgres] > SELECT current_timestamp;
        current_timestamp         
----------------------------------
 29-MAR-17 15:41:59.203485 +02:00
(1 row)

And finally, to calculate the current time in another time zone you can do something like this:

(postgres@[local]:4445) [postgres] > SELECT current_time AT TIME ZONE 'Europe/Zurich', current_time AT TIME ZONE 'US/Samoa';
      timezone      |      timezone      
--------------------+--------------------
 15:43:05.634124+02 | 02:43:05.634124-11
(1 row)

All the time zone names can be found in pg_timezone_names:

(postgres@[local]:4445) [postgres] > select * from pg_timezone_names;
               name               | abbrev | utc_offset | is_dst 
----------------------------------+--------+------------+--------
 America/North_Dakota/Beulah      | CDT    | -05:00:00  | t
 America/North_Dakota/Center      | CDT    | -05:00:00  | t
 America/North_Dakota/New_Salem   | CDT    | -05:00:00  | t
 America/Argentina/Ushuaia        | ART    | -03:00:00  | f
...

Hope this helps…

 

Cet article Can I do it with PostgreSQL? – 10 – Timezones est apparu en premier sur Blog dbi services.

Take a few minutes to patch Oracle APEX 5.1

Dimitri Gielis - Wed, 2017-03-29 08:00
Yesterday the first patch set for Oracle Application Express (APEX) 5.1 was made available for download.

one-off patches

If you encounter issues, you can ask for support and most likely a patch will be made available a bit later through support.oracle.com. The APEX team is doing a great job with this.

For example some people using APEX Office Print had an issue which was caused by a bug in APEX_JSON (which we heavily use behind the scenes). The next day the APEX Dev Team already made a patch available (PSE 25650850).

patch set

Instead of applying those one-off patches, you can wait for a patch set which includes those one-off patches and more. If you haven't moved to Oracle APEX 5.1 yet, you can just download the latest version, which already includes 5.1.1.

There are many fixes for the Interactive Grid features, but also many others, for example for login issues.

applying the patch set

If you're on Oracle APEX 5.1, search for patch 25341386. Unzip the file, stop the webserver, run @apxpatch, copy the images folder and start the webserver again.
About 2 minutes later you're on the latest version. 
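
For reference, the steps above could look roughly like this on the command line. This is only a sketch: the zip file name, the paths and the ORDS image location are assumptions, so always follow the README that ships with the patch.

# Rough sketch of the 5.1.1 patch steps; file names and paths are assumptions.
unzip p25341386_apex511.zip -d /tmp/apex511    # the downloaded patch 25341386
cd /tmp/apex511/25341386

# 1. stop the web server / listener that serves APEX (e.g. ORDS or OHS)

# 2. run the patch script as SYS in the database (or PDB) that hosts APEX
sqlplus sys@mydb as sysdba @apxpatch.sql

# 3. copy the new images to the directory your web server serves as /i/
cp -R images/* /opt/ords/apex_images/

# 4. start the web server again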


Happy patching...
Categories: Development

New Certifications Demonstrate Continued Momentum for Oracle Public Cloud

Oracle Press Releases - Wed, 2017-03-29 07:00
Press Release
New Certifications Demonstrate Continued Momentum for Oracle Public Cloud Cloud compliance certifications further validate Oracle Cloud in highly regulated sectors

Redwood Shores Calif—Mar 29, 2017

Oracle today announced it has achieved a series of compliance certifications and attestations for its Public Cloud offering, including certifications and attestations for ISO 27001, HIPAA, SOC1 and SOC2 for a number of core services. Administered by Schellman & Co., these certifications in industries such as healthcare help provide validation of Oracle’s offerings in areas including security, availability, processing integrity and privacy.
 
Oracle’s portfolio of Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) solutions received Service Organization Control (SOC) attestations for the following key services: Database Public Cloud Service, Java Public Cloud Service, Database Backup Cloud Service, Exadata Cloud Service, Big Data Cloud Service, Big Data Preparation Service, Big Data Discovery, Application Builder Cloud Service, Storage Cloud Service, Dedicated Compute Cloud Service, and Public Compute Cloud Service. These SOC certifications confirm Oracle’s compliance with international service organization reporting standards. SOC reports are standards that help organizations establish trust and confidence in their service delivery processes by assessing whether service organizations are performing their duties appropriately in a controlled, stable and secured environment.
 
Oracle recently received a Health Insurance Portability and Accountability Act (HIPAA) attestation for its Oracle Fusion Suite of Software-as-a-Service (SaaS) applications—including Enterprise Resource Planning (ERP), Human Capital Management (HCM), and Customer Relationship Manager (CRM) Cloud Service—demonstrating that its SaaS solutions meet the requirements established by the U.S. Department of Health and Human Services for organizations working in the healthcare industry. HIPAA attestations affirm the proper saving, accessing and sharing of individual medical and personal information, as well as compliance with national security standards to protect health data created, received, maintained or transmitted electronically.
 
Oracle recently received an International Standards Organization (ISO) 27001 certification demonstrating the proper management and security of assets such as financial information, intellectual property, employee details or information entrusted to an organization by third parties, for its Public Cloud SaaS suite of services in the core areas of Fusion ERP, HCM, CRM, Taleo Social, Taleo Business Edition, Service Cloud, Eloqua Marketing Cloud, BigMachines CPQ, and Field Service Cloud. 
 
 
“Oracle is continuously investing time and resources to meet our customers’ strict requirements across highly regulated industries,” said Erika Voss, Global Senior Director, Public Cloud Compliance, Risk and Privacy, Oracle. “These new certifications not only validate the reliability and security features of the Oracle Cloud; they effectively make Oracle’s solutions available to thousands of new customers in the Healthcare and Public Sector industries.”
 
Contact Info
Scott Thornburg
Oracle
+1.415.816.8844
scott.thornburg@oracle.com
About Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data. The Oracle Cloud delivers nearly 1,000 SaaS applications, and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world supporting 55 billion transactions each day. For more information, please visit us at http://cloud.oracle.com.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Scott Thornburg

  • +1.415.816.8844

pl/sql code debugging - long running block

Tom Kyte - Wed, 2017-03-29 05:06
hi - our developers have a package that they are running within a begin... end block. They are telling us that this block seems to run for a very long time. this is the only thing that is running in the database and using over 90% of cpu. we ran the ...
Categories: DBA Blogs

equivalent of DBMS_XMLGEN.GETXML to generate json

Tom Kyte - Wed, 2017-03-29 05:06
We are currently generating XML data based on DBMS_XMLGEN.GETXML() to send it to client. Instead of xml , we want to send data as JSON. Is there anything similar to DBMS_XMLGEN.GETXML to generate json ?
Categories: DBA Blogs

Oracle Table Extents

Tom Kyte - Wed, 2017-03-29 05:06
Tom, why, when I create a table with the following values in each of its partitions, do I get 195 extents? The partitions have an INITIAL equal to the size of the partition. COMPRESS BASIC STORAGE ( INITIAL 7748954362 - 7.2 GB NE...
Categories: DBA Blogs

oracle 12.2 approximate functions

Tom Kyte - Wed, 2017-03-29 05:06
When I run the new Oracle 12c approximate square root function on a negative number, I keep getting the same answer. What's going on? SQL> select approximate_sqrt(-1) from dual; APPROXIMATE_SQRT(-1) ---------------------- ...
Categories: DBA Blogs
