Feed aggregator

OpenText Enterprise World Europe 2019 – Day 3

Yann Neuhaus - Thu, 2019-03-14 16:35

Last but not least, today was mainly dedicated to demos and customer cases. It started with the global stream presenting some OpenText applications such as Core for Quality: an application developed with AppWorks and integrated with Magellan. It is designed to manage quality issues and connects to Documentum in order to link issues with SOP documents.

In the various demos we saw how these SaaS applications integrate with OT2 and how responsive they are (drag and drop from the desktop, loading times, easy access to other OT2 applications and so on).

OT2

We went to an OT2-specific session to get more information about this new way of bringing services and business to customers (and developers).


OT2 is a platform of services. It can be really interesting for companies that want to avoid on-site IT management and offload infrastructure management to OT2, with OpenText taking charge of security, maintenance, updates, patches and so on.

The main purpose is “A2A”, meaning Any to Any, or Anywhere, Anytime. Applications hosted on OT2 can be accessed from anywhere because it is a public cloud. As it is hosted by OpenText, you should expect almost no downtime, up-to-date applications and, most importantly, security.

Core is another key OpenText offering. It is a secure way to share content with people outside the company’s organization, such as external partners and customers (for documentation sharing). The content can be edited by external people, as it is synced with your application (or backend) at all times. We saw how easy it is to share content based on rules or simply by selection inside the application; everything else is taken care of by the product.

Federated Compliance will also come as a service, allowing you to track data and usage across your applications: an easy way to keep an eye on the status of your infrastructure.


Some other products were mentioned, such as SAP Archive Server being brought to the cloud with the help of OpenText, but we won’t focus on that point. Developers are guided through Smart View application development, which is directly integrated with OT2. With this, OpenText is counting on developers to broaden the range of solutions available in OT2.

Documentum Stories

During the day we had the opportunity to discover some success stories from Documentum customers.

Wiesbaden

Wiesbaden is a city in Germany that came across an organizational issue in its administration sector.


In this sector it is difficult to make changes due to a reluctance to change habits. Dr. Thomas Ortseifen, who was presenting, told us that the administration was not well organized and that each part of it was “living alone” in its own ecosystem.

Hence, the city decided to put its trust in OpenText to bring coherence to this organization. The solution proposed by OpenText was to set up a centralized DMS (Documentum) at the center of an SOA architecture, providing flexibility and the possibility to use APIs to scale out new applications.

Here are the benefits of this solution:

  • Enhanced information flow
  • Faster, continuous availability
  • Shorter transport times
  • Enhanced usage of existing databases
  • Enhanced processes
  • Cross-functional complex search and analysis options
  • Reduced costs for information creation
  • Reduced costs for information management
  • Reduced costs for space required
Alstom ACOMIS

Alstom is a French company providing transport solutions: trams, metros, digital mobility, maintenance, modernization, rail infrastructure and so on.


ACOMIS stands for Alstom COntent Management Information System. At first it was set up on premises with several Webtop instances and docbases.

Alstom decided to create ACOMIS V1 in order to merge all docbases and centralize the business. To achieve this, with the help of OpenText, they migrated millions of documents and merged everything into D2 and a single docbase, all of it in a private cloud, leaving the on-premises setup behind.

Added business value:

  • Replacing Webtop with D2 for a better user experience
  • ACOMIS operated by OpenText specialists
  • A single repository for cross-project searches

New requirements then appeared, along with some performance issues: the need for GDPR compliance and support for a new 3D standard format. To gain these features, Alstom decided to move to a V2, hosted in the public cloud and still managed by OpenText, in order to solve the performance issues (network lag). They used Brava! to view 3D objects in an HTML5 interface.

Added business value:

  • Public cloud for performance and external access
  • GDPR compliance
  • Security managed by OpenText
  • Version 16.4 with Brava! integration for 3D viewer
Conclusion

The OpenText World in Vienna is now closed. We met a lot of people and experts, and we clearly see the trend towards services and centralization at OpenText. We are excited to see where it is going.

The article OpenText Enterprise World Europe 2019 – Day 3 appeared first on the dbi services blog.

Slides from March AZORA meeting

Bobby Durrett's DBA Blog - Thu, 2019-03-14 16:27

Here are the slides from our March Arizona Oracle User Group (AZORA) meeting:

Daniel Morgan Security Master Class

We really appreciate Daniel Morgan taking the time to share this information about the increasingly important topic of database security.

Also, AZORA is always looking for people to present at future meetings. We have one more meeting in May before the blazing hot Arizona summer and then we start up again in September. Email me at bobby@bobbydurrettdba.com if you would like to speak at a future meeting.

Bobby

Categories: DBA Blogs

Kata Containers: An Important Cloud Native Development Trend

OTN TechBlog - Thu, 2019-03-14 15:18
Introduction

One of Oracle’s top 10 predictions for developers in 2019 was that a hybrid model that falls between virtual machines and containers would rise in popularity for deploying applications.

Kata Containers are a relatively new technology that combine the speed of development and deployment of (Docker) containers with the isolation of virtual machines. In the Oracle Linux and virtualization team we have been investigating Kata Containers and have recently released Oracle Container Runtime for Kata on Oracle Linux yum server for anyone to experiment with. In this post, I describe what Kata containers are as well as some of the history behind this significant development in the cloud native landscape. For now, I will limit the discussion to Kata as containers in a container engine. Stay tuned for a future post on the topic of Kata Containers running in Kubernetes.

History of Containerization in Linux

The history of isolation, sharing of resources and virtualization in Linux and in computing in general is rich and deep. I will skip over much of this history to focus on some of the key landmarks on the way there. Two Linux kernel features are instrumental building blocks for the Docker Containers we’ve become so familiar with: namespaces and cgroups.

Linux namespaces are a way to partition kernel resources such that two different processes have their own view of resources such as process IDs, file names or network devices. Namespaces determine what system resources you can see.

Control Groups or cgroups are a kernel feature that enable processes to be grouped hierarchically such that their use of subsystem resources (memory, CPU, I/O, etc.) can be monitored and limited. Cgroups determine what system resources you can use.

One of the earliest containerization features in Linux to combine both namespaces and cgroups was Linux Containers (LXC). LXC offered a userspace interface to make the Linux kernel containment features easy to use and enabled the creation of system or application containers. Using LXC, you could run, for example, CentOS 6 and Oracle Linux 7, two completely different operating systems with different userspace libraries and versions, on the same Linux kernel.

Docker expanded on this idea of lightweight containers by adding packaging, versioning and component reuse features. Docker Containers have become widely used because they appealed to developers. They shortened the build-test-deploy cycle because they made it easier to package and distribute an application or service as a self-contained unit, together with all the libraries needed to run it. Their popularity also stems from the fact that they appeal to developers and operators alike. Essentially, Docker Containers bridge the gap between dev and ops and shorten the cycle from development to deployment.

Because containers —both LXC and Docker-based— share the same underlying kernel, it’s not inconceivable that an exploit able to escape a container could access kernel resources or even other containers. Especially in multi-tenant environments, this is something you want to avoid.

Projects like Intel® Clear Containers and Hyper runV took a different approach to parceling out system resources: their goal was to combine the strong isolation of VMs with the speed and density (the number of containers you can pack onto a server) of containers. Rather than relying on namespaces and cgroups, they used a hypervisor to run a container image.

Intel® Clear Containers and Hyper runV came together in Kata Containers, an open source project and community, which saw its first release in March of 2018.

Kata Containers: Best of Both Worlds

The fact that Kata Containers are lightweight VMs means that, unlike traditional Linux containers or Docker Containers, Kata Containers don’t share the same underlying Linux kernel. Kata Containers fit into the existing container ecosystem because developers and operators interact with them through a container runtime that adheres to the Open Container Initiative (OCI) specification. Creating, starting, stopping and deleting containers works just the way it does for Docker Containers.

Image by OpenStack Foundation licensed under CC BY-ND 4.0

In summary, Kata Containers:

  • Run their own lightweight OS and a dedicated kernel, offering memory, I/O and network isolation
  • Can use hardware virtualization extensions (VT) for additional isolation
  • Comply with the OCI (Open Container Initiative) specification as well as CRI (Container Runtime Interface) for Kubernetes
Installing Oracle Container Runtime for Kata

As I mentioned earlier, we’ve been researching Kata Containers here in the Oracle Linux team and as part of that effort we have released software for customers to experiment with. The packages are available on Oracle Linux yum server and its mirrors in Oracle Cloud Infrastructure (OCI). Specifically, we’ve released a kata-runtime and related components, as well as an optimized Oracle Linux guest kernel and guest image used to boot the virtual machine that will run a container.

Oracle Container Runtime for Kata relies on QEMU and KVM as the hypervisor to launch VMs. To install Oracle Container Runtime for Kata on a bare metal compute instance on OCI:

Install QEMU

Qemu is available in the ol7_kvm_utils repo. Enable that repo and install qemu

sudo yum-config-manager --enable ol7_kvm_utils
sudo yum install qemu

Install and Enable Docker

Next, install and enable Docker.

sudo yum install docker-engine
sudo systemctl start docker
sudo systemctl enable docker

Install kata-runtime and Configure Docker to Use It

First, configure yum for access to the Oracle Linux Cloud Native Environment - Developer Preview yum repository by installing the oracle-olcne-release-el7 RPM:

sudo yum install oracle-olcne-release-el7

Now, install kata-runtime:

sudo yum install kata-runtime

To make the kata-runtime an available runtime in Docker, modify Docker settings in /etc/sysconfig/docker. Make sure SELinux is not enabled.

The line that starts with OPTIONS should look like this:

$ grep OPTIONS /etc/sysconfig/docker
OPTIONS='-D --add-runtime kata-runtime=/usr/bin/kata-runtime'

Next, restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker

Run a Container Using Oracle Container Runtime for Kata

Now you can use the usual docker command to run a container, with the --runtime option to indicate you want to use kata-runtime. For example:

sudo docker run --rm --runtime=kata-runtime oraclelinux:7 uname -r
Unable to find image 'oraclelinux:7' locally
Trying to pull repository docker.io/library/oraclelinux ...
7: Pulling from docker.io/library/oraclelinux
73d3caa7e48d: Pull complete
Digest: sha256:be6367907d913b4c9837aa76fe373fa4bc234da70e793c5eddb621f42cd0d4e1
Status: Downloaded newer image for oraclelinux:7
4.14.35-1909.1.2.el7.container

To review what happened here: Docker, via the kata-runtime, instructed KVM and QEMU to start a VM based on a special-purpose kernel and a minimized OS image. Inside the VM a container was created, which ran the uname -r command. You can see from the kernel version that a “special” kernel is running.

Running a container this way takes more time than a traditional container based on namespaces and cgroups, but if you consider the fact that a whole VM is launched, it’s quite impressive. Let’s compare:

# time docker run --rm --runtime=kata-runtime oraclelinux:7 echo 'Hello, World!'
Hello, World!

real    0m2.480s
user    0m0.048s
sys     0m0.026s

# time docker run --rm oraclelinux:7 echo 'Hello, World!'
Hello, World!

real    0m0.623s
user    0m0.050s
sys     0m0.023s

That’s about 2.5 seconds to launch a Kata Container versus 0.6 seconds to launch a traditional container.

Conclusion

Kata Containers represent an important phenomenon in the evolution of cloud native technologies. They address both the need for security, through virtual machine isolation, and the need for speed of development, through seamless integration into the existing container ecosystem, without compromising on computing density.

In this blog post I’ve described some of the history that brought us Kata Containers, and I’ve shown how you can experiment with them yourself using the Oracle Container Runtime for Kata packages.

Q3 FY19 GAAP EPS INCREASED TO $0.76 and NON-GAAP EPS UP 8% TO $0.87

Oracle Press Releases - Thu, 2019-03-14 15:00
Press Release
Q3 FY19 GAAP EPS INCREASED TO $0.76 and NON-GAAP EPS UP 8% TO $0.87 Operating Income Up 3% in USD and 7% in Constant Currency

Redwood Shores, Calif.—Mar 14, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2019 Q3 results. Total Revenues were $9.6 billion, down 1% in USD and up 3% in constant currency compared to Q3 last year. Cloud Services and License Support revenues were $6.7 billion, while Cloud License and On-Premise License revenues were $1.3 billion. Total Cloud Services and License Support plus Cloud License and On-Premise License revenues were $7.9 billion, unchanged in USD and up 3% in constant currency.

GAAP Operating Income was up 3% to $3.4 billion and GAAP Operating Margin was 35%. Non-GAAP Operating Income was up 2% to $4.3 billion and non-GAAP Operating Margin was 44%. GAAP Net Income increased to $2.7 billion and non-GAAP Net Income was down 8% to $3.2 billion. GAAP Earnings Per Share increased to $0.76 while non-GAAP Earnings Per Share was up 8% to $0.87.

Short-term deferred revenues were up 1% to $8.0 billion compared to a year ago. Operating Cash Flow was $14.8 billion during the trailing twelve months.

“I’m pleased with Q3 non-GAAP results as revenues grew 3%, operating income increased 5% and EPS grew 12% in constant currency,” said Oracle CEO, Safra Catz. “Our overall operating margin improved to 44% as our lower margin hardware business continued to get smaller while our higher margin cloud business continued to get bigger. With year-to-date non-GAAP EPS growth rate now at 16% in constant currency, we will comfortably deliver another year of double-digit EPS growth.”

“Our Fusion HCM, ERP, Supply Chain and Manufacturing Cloud applications revenue in total grew 32% in Q3,” said Oracle CEO, Mark Hurd. “Our NetSuite ERP Cloud applications also delivered strong results with a revenue growth rate of 30%. That said, let me call your attention to the following approved statement about Oracle’s entire applications business from industry analyst IDC.”

Per IDC’s latest annual market share results, Oracle is the #1 Enterprise Applications vendor in North America based on market share and revenue, surpassing Salesforce.com and SAP. 

Source: IDC Semiannual Software Tracker, Oct. 2018. Market share and revenue for 2H2017-1H2018. North America is the USA and Canada. Enterprise Applications refer to the IDC markets CRM, Enterprise Resource Management (including HCM, Financials, Procurement, Order Management, PPM, EAM), SCM, and Production Applications.

“The future of Oracle’s Cloud Infrastructure business rests upon our highly-secure Gen2 Cloud Infrastructure featuring the world’s first and only Autonomous Database,” said Oracle CTO, Larry Ellison. “By the end of Q3 we had nearly 1,000 paying Autonomous Database customers and we added around 4,000 new Autonomous Database trials in Q3. It’s early days, but this is the most successful introduction of a new product in Oracle’s forty year history.”

Oracle also announced that its Board of Directors declared a quarterly cash dividend of $0.24 per share of outstanding common stock, reflecting a 26% increase over the current quarterly dividend of $0.19.  Larry Ellison, Oracle’s Chairman of the Board, Chief Technology Officer and largest stockholder, did not participate in the deliberation or the vote on this matter.  This increased dividend will be paid to stockholders of record as of the close of business on April 11, 2019, with a payment date of April 25, 2019.

Q3 Fiscal 2019 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q3 results and fiscal 2019 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 9995836.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE:ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our EPS and the future of Oracle’s Cloud Infrastructure business, are all "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud strategy, including our Oracle Software as a Service and Infrastructure as a Service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, integrate acquired products and services, or enhance and improve our existing products and support services in a timely manner, or price our products and services to meet market demand, customers may not purchase or subscribe to our software, hardware or cloud offerings or renew software support, hardware support or cloud subscriptions contracts. (3) Enterprise customers rely on our cloud, license and hardware offerings and related services to run their businesses and significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings and related services could expose us to product liability, performance and warranty claims, as well as cause significant harm to our brand and reputation, which could impact our future sales. (4) If the security measures for our products and services are compromised and as a result, our customers’ data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged and we may experience legal claims and reduced sales. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) We have a selective and active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (SEC) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of March 14, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Getting Your Feet Wet With OCI Streams

OTN TechBlog - Thu, 2019-03-14 14:35

Back in December we announced the development of a new service on Oracle Cloud Infrastructure called Streaming.  The announcement, product page and documentation have a ton of use cases and information on why you might use Streaming in your applications, so let's take a look at the how.  The OCI Console allows you to create streams and test them out via the UI dashboard, but here's a simple example of how to both publish and subscribe to a stream in code via the OCI Java SDK.

First you'll need to create a stream.  You can do that via the SDK, but it's pretty easy to do via the OCI Console.  From the sidebar menu, select Analytics - Streaming and you'll see a list of existing streams in your tenancy and selected compartment.

Click 'Create Stream' and populate the dialog with the requested information.

After your stream has been created, you can view the Stream Details page.

As I mentioned above, you can test out stream publishing by clicking 'Produce Test Message' and populating the message and then test receiving by refreshing the list of 'Recent Messages' on the bottom of the Stream Details page.

To get started working with this stream in code, download the Java SDK (link above) and make sure it's on your classpath.  After you've got the SDK ready to go, create an instance of a StreamClient which will allow you to make both 'put' and 'get' style requests.  Producing a message to the stream looks like so:
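The post illustrates this with the Java SDK; as a rough sketch of the same put-message flow, here is an equivalent using the OCI Python SDK's oci.streaming module (the endpoint and stream OCID below are placeholders, and the class and method names are assumptions worth verifying against the current SDK documentation):

import base64
import oci

# Load the default OCI config (~/.oci/config) and point the client at the
# stream's messages endpoint shown on the Stream Details page (placeholder value)
config = oci.config.from_file()
client = oci.streaming.StreamClient(
    config, service_endpoint="https://streaming.us-phoenix-1.oci.oraclecloud.com")

stream_ocid = "ocid1.stream.oc1..example"  # placeholder OCID

# Message keys and values are sent base64-encoded
entry = oci.streaming.models.PutMessagesDetailsEntry(
    key=base64.b64encode(b"demo-key").decode(),
    value=base64.b64encode(b"Hello, OCI Streaming!").decode())
details = oci.streaming.models.PutMessagesDetails(messages=[entry])

result = client.put_messages(stream_ocid, details)
for e in result.data.entries:
    print("partition:", e.partition, "offset:", e.offset)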

Reading the stream requires you to work with a Cursor. I like to work with group cursors because they handle committing automatically, so I don't have to commit the cursor manually. You create a group cursor and then use it to fetch the stream messages. In my application this runs in a loop, and I reassign the cursor that is returned from the call to client.getMessages() so that the cursor always remains open and active.
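Again as a hedged sketch with the Python SDK (reusing the client and stream_ocid from the previous snippet; names are assumptions to double-check), a group cursor commits offsets for the consumer group automatically, and each get_messages response carries the next cursor to use:

import time

# A group cursor auto-commits offsets for the named consumer group
group_details = oci.streaming.models.CreateGroupCursorDetails(
    group_name="demo-group",
    instance_name="demo-instance",
    type=oci.streaming.models.CreateGroupCursorDetails.TYPE_TRIM_HORIZON,
    commit_on_get=True)
cursor = client.create_group_cursor(stream_ocid, group_details).data.value

# Poll the stream, always reassigning the cursor returned with each batch
# so that it remains open and active
while True:
    response = client.get_messages(stream_ocid, cursor, limit=10)
    for message in response.data:
        if message.value:
            print(base64.b64decode(message.value).decode())
    cursor = response.headers["opc-next-cursor"]
    time.sleep(1)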

And that's what it takes to create a stream, produce a message and read the messages from the stream.  It's not a difficult feature to implement and the performance is comparable to Apache Kafka in my observations, but it's nice to have a native OCI offering that integrates well into my application.  There are also future integration plans for upcoming OCI services that will eventually allow you to publish to a stream, so stay tuned for that.

Announcement: “Oracle Performance Diagnostics and Tuning” Seminar – Australia/NZ Winter Dates

Richard Foote - Wed, 2019-03-13 22:57
I’m very excited to announce the first public running of my new “Oracle Performance Diagnostics and Tuning” Seminar throughout Australia and New Zealand this coming winter 2019. (See my Oracle Performance Diagnostics and Tuning Seminar page for all the seminar content and details). This is a must attend seminar aimed at Oracle professionals (both DBAs […]
Categories: DBA Blogs

OCI New Service Roundup

OTN TechBlog - Wed, 2019-03-13 16:28
This blog was originally published by Jesse Butler on the Cloud Native blog. 

OpenText Enterprise World Europe 2019 – Day 2

Yann Neuhaus - Wed, 2019-03-13 16:23

On Day 2 of OTEW we followed the global stream in the morning, which picked up most of the points from yesterday. We also had the pleasure of a session from Dr. Michio Kaku, theoretical physicist, futurist and popularizer of science, who has written several books about physics and how he sees the future.


He sees us, within the next 20 years, ultra-connected through internet lenses. Moore’s law will collapse around 2025, when basic transistors will probably be replaced by graphene technology, which will in turn, at some unknown point, be replaced by quantum computing machines (qubits instead of bits). The main issue with quantum computing is that qubits are easily disrupted by noise and electromagnetic waves (decoherence). According to him, the internet will eventually be replaced by a “brain net” thanks to new biological technologies focusing on sensations instead of visualization.

What’s new and what’s next for OpenText Documentum

We were really looking forward to this session as we, Documentum experts, were excited to see the future of this widespread technology. Micah Byrd, Director of Product Management at OpenText, started by talking about the generic integration roadmap, built around “Content in Context”, “Cloud”, “LoB and industry” and “Intelligent automation”, and how Documentum interprets these guidelines.

Documentum will be more and more integrated with Office 365 thanks to the new Smart View UI: a coherent solution across all platforms that allows easy and seamless integration into leading applications like Word and SAP. This is content in context.

OpenText has been aggressively pushing Documentum to the cloud for several years with different deployment options: private, managed or public cloud. With private, you keep your data in your own data center (2014-2016). With managed, your data goes to the OpenText cloud (2017-2018). With public, your data goes wherever you want, on cloud providers such as AWS, Azure, Google and so on (2019). OpenText is also investing in containerization, with Docker and Kubernetes, for “Documentum from Everywhere”.

Documentum future innovations

Among the main new features is the continuing integration of Documentum into Office 365, which already supports Word and SAP and will soon (EP7 in October) support Excel, PowerPoint and Outlook. It means that you’ll be able to access Documentum data from the Office applications. In addition, OpenText wants to enable bi-directional synchronization between Documentum and Core, opening up possibilities for interacting with content outside of the corporate network. Hence, the content will be synced no matter where, no matter when, in a secure and controlled way.


Also coming is an improved content creation experience in D2, thanks to deeper integration of Brava! for annotation sharing as well as more collaborative capabilities with SharePoint (an improvement of DC4SP).


A new vision of security was also presented.

D2 on mobile, developed in AppWorks, will come soon on iOS and Android.

We are particularly excited about a prototype presented today: the Documentum Security Dashboard. It gives a quick and easy view of user activities, tracks content usage such as views and downloads, and can show trends in content evolution. We hope it will be released one day.


Many more topics around Documentum components were presented, but we will not go into detail about them here; we focused only on the main features.

Documentum D2 Demo

We had a chance to get our hands on the new D2 Smart View, which brings responsiveness and a modern feel. Our impression of it in one word: SMOOTH.


Conclusion

Another amazing day at OTEW, where we met a lot of experts and attended interesting sessions on the huge OpenText world.

The article OpenText Enterprise World Europe 2019 – Day 2 appeared first on the dbi services blog.

Do I have to use the Navigator?

Jim Marion - Wed, 2019-03-13 16:20
Navigator exposed from the NavBar

I have seen several very clever Navbar customizations including:
  • Auto-expand the Navigator when expanding the Navbar and
  • Showing the breadcrumb path in the Navigator.
These customizations seem quite valuable to anyone that uses the Navigator. And who doesn't use the Navigator? It is the primary delivered navigation method for Classic content. But are we really supposed to depend on the Navigator? If so, should these customizations be incorporated into the product? Or are we missing the point of Fluid navigation? Does Fluid provide an alternative?

Let's start with a review of Self-Service. With a complete Self-Service Fluid rollout, do you need to use the Navigator to launch any Self-Service functionality? No. Every Self-Service transaction is available from a tile. Consider Personal Details. When an HCM Self-Service user launches Personal Details from a tile, PeopleSoft opens a WorkCenter-like experience, allowing the user to navigate through the Personal Details components using a left-hand sidebar. Again, did we need the Navigator for any of this functionality? No. But that was Fluid. What about Classic? In PeopleSoft HCM PUM 29 there are 400+ Fluid components and nearly 7,000 Classic components. How would you navigate to those 7,000 Classic components without the Navigator? Classic components predate Fluid and therefore aren't represented by tiles. Imagine if they were? How many homepages would you need to house 7,000 tiles? How many tiles would you have per homepage? Too many! So we use the navigator... but wait!

Let's review the list of Fluid navigation options:

  • Homepages
  • Tiles
  • Navigation Collections (published as tiles)
  • Related Actions
  • Activity Guides (Fluid, optimized as well as HCM ESS Activity Guides with categories)
  • WorkCenters (Enterprise Components Fluid WorkCenters or Classic WorkCenters)
  • Master/Detail
  • Side page 1
  • Two-panel layout

Many of these options are configurable and do not require Application Designer (Developer not required).

Fluid WorkCenter (Master/Detail) with Classic+ Components
Here is how I believe Fluid navigation should work. Keep in mind that Fluid navigation spans both Classic and Fluid components. Fluid navigation is not just for Fluid Components.


      Role-based homepage with business process-based tiles
    1. Homepages should be role based. My homepage collection should depend on the hats I wear in my organization.
    2. Within each homepage, I should have business process-based tiles. These tiles should launch WorkCenter-like Navigation Collections, Activity Guides, and so on. For example, if I am a PeopleSoft developer, then I should see a tile for managing security. When launched, that security tile will display a left-hand panel for navigating within the Security business process. If I manage payroll, then I might expect to find a tile labeled "Payroll WorkCenter USA" that includes navigation for all of the components associated with the Payroll business process. Remember, the items in the left-hand sidebar of a Navigation Collection or WorkCenter may be a combination of Classic, Classic +, and Fluid.
    3. From certain transaction pages, I should see Related Actions that allow me to drill from one transaction to a related transaction.
    Related Actions that drill from one component to another

    Done right, 95+% of my work will launch from tiles. The Navigator becomes my safety net. I reach for the Navigator once a year or every few years to complete some obscure configuration task reserved for implementation.


    What about the Navbar? We often think of the Navbar as an intermediate step used to launch the Navigator, but the Navbar is a homepage of tiles. Instead of a container for the Navigator, the Navbar is an always-present homepage with tiles I can launch from anywhere in PeopleSoft. Let's say you work in Procurement and often answer questions about Purchase Orders. You have your regular buyer and procurement duties, but you must be ready at a moment's notice to answer a question or solve a problem. To prepare for the inevitable interruption, you add your most common inquiry business process tiles to the Navbar. You are now two clicks from the answer to any question.

    Now I ask you, "if you never use the Navigator, do you still desire a customization to automatically expand the Navigator when opening the Navbar?" I think not.

    How did we get here? I believe we are in an intermediate navigational state. Classic used breadcrumbs. Fluid uses business processes. I believe the problem is that our Classic content was moved into the Fluid navigation paradigm (PeopleTools 8.55) without usable business process maps (Navigation Collections, WorkCenters, and so on). We, therefore, must build our own business process maps using Fluid navigation tools to align Classic content with Fluid navigation.

    Building navigation is a critical phase of any Fluid implementation. Get it wrong and you may find yourself rolling back Fluid in favor of Classic (no joke, I have seen this before). When implementing Fluid we often focus on Self-Service, and rightly so. Self-Service comprises the majority of our headcount. But often Self-Service users are a minority of our actual time spent using PeopleSoft. Oracle has done a great job of building Fluid navigation for Self-Service users. What's missing? Fluid navigation for Classic. Today that is our job. As developers and business analysts, we must build that missing business process based navigation for our back office users.

    We believe that navigation is a critical component to a successful Fluid implementation and that is why we devote the first day of our Fluid 1 course to Fluid navigation. To learn more or to schedule a course, visit us online at jsmpros.com.


    review: architecting microsoft azure solutions

    Dietrich Schroff - Wed, 2019-03-13 15:15
    Last week I read the exam ref "Architecting Microsoft Azure Solutions".

    The book cover states: "Designed for architects and other cloud professionals ready to advance their status, Exam Ref focuses on the critical thinking and decision-making acumen needed for success at the MCSA level." The book "Architecting Microsoft Azure Solutions" comes with 320 pages and 6 chapters. The claim of the book: "This book teaches you how to design and architect secure, highly-available, performant, monitored and resilient solutions on Azure".

    The first chapter is "Design compute infrastructure". The beginning is clearly structured: Fault Domains, Availability Sets and Update Domains. Unfortunately, when listing the VM types, various letters are shown, but an explanation of what those letter abbreviations mean is missing.
    The sub-chapter on migration contains little more than a long list of URLs; helpful examples are not provided. The next sub-chapters, on serverless computing and microservices, are not worth reading. It is not at all clear which requirements have to be met in order to build an application as serverless or in a container, although there are many comparisons of when serverless computing fits better than microservices.
    The sub-chapter "Design Web Applications" loses itself in general considerations about availability and a description of REST.
    The biggest problem with Chapter 1 is the lack of examples that would let you work through the topics once. Also missing at the end of the chapter is the typical question catalog with which one could prepare for the exam.

    After chapter 1 I did not want to read any further - that would have been a mistake. For all who buy this book: skip Chapter 1!

    Chapters 2 and 3 (Storage & Networking) are really good. They provide brief explanations, and for every use case detailed instructions for the Azure command line or the portal, including screenshots, are presented. Both chapters are very well written and give an overview of the respective topics. Here is a list for the storage chapter: Blob Storage, Azure Files, Azure Disks, Azure Data Catalog, Azure Data Factory, SQL Data Warehouse, Data Lake Analytics, Analysis Services, HDInsight, SQL Database, SQL Server Stretch Database, MySQL, PostgreSQL, Redis Cache, Data Lake, Azure Search, Azure Time Series, Cosmos DB, MongoDB. There is no topic left open. The same applies to the network chapter.

    Chapter 4, "Design security and identity solutions", is very well structured. All terms are introduced at the beginning and then various options are played through with sequence diagrams. Subsequently, the appropriate services such as Azure Active Directory are introduced. Very nice here is the presentation of the integration possibilities with ASP.NET. Beyond that, topics such as integration with Office 365 (calendar access) and key management in the cloud are highlighted.

    The fifth chapter, "Design solutions by using platform services", is in my view more of an outlook. Topics like AI, IoT and streaming are treated here. You can take away what is possible and which building blocks Azure provides.

    The final chapter, "Design for operations", deals with cross-cutting functionality such as monitoring and alerting. An overview of the following services is delivered: Azure Monitor, Azure Advisor, Azure Service Health, Azure Activity Log, Azure Dashboard, Azure Metrics Explorer, Azure Alerts, Azure Log Analytics, Azure Application Insights. Almost every topic has an example including configuration via the Azure portal.

    Conclusion: except for the first chapter, this is a very good book to get started with. It is not well suited for exam preparation, as no question sets or multiple-choice lists are included. It is a pity that the sub-chapters have no numbering, so you have to navigate by font sizes. Nevertheless, you will hardly find a faster entry into Azure.



    Nine Ways Oracle Cloud is Open

    OTN TechBlog - Wed, 2019-03-13 12:58

    In the recent Break New Ground paper, 10 Predictions for Developers in 2019, openness was cited as a key factor. Developers want to choose their clouds based on openness. They want a choice of languages, databases, and compute shapes, among other things. This allows them to focus on what they care about – creating – without ops concerns or lock in. In this post, we outline the top ways that Oracle is delivering a truly open cloud. 

    Databases

    Oracle Cloud’s Autonomous Database, which is built on top of Oracle Database, conforms to open standards, including ISO SQL:2016, JDBC, Python PEP 249, ODBC, and many more. Autonomous Database is a multi-model database and supports relational as well as non-relational data, such as JSON, Graph, Spatial, XML, Key/Value, Text, amongst others. Because Oracle Autonomous Database is built on Oracle Database technology, customers can “lift and shift” workloads from/to other Oracle Database environments, including those running on third-party clouds and on-premises infrastructure. This flexibility makes Oracle Autonomous Database a truly open cloud service compared to other database cloud services in the market. Steve Daheb from Oracle Cloud Platform provides more information in this Q&A.

    In addition, Oracle MySQL continues to be the world's most popular open source database (source code) and is available in Community and Enterprise editions. MySQL implements standards such as ANSI/ISO SQL, ODBC, JDBC and ECMA. MySQL can be deployed on-premises, on Oracle Cloud, and on other clouds.

    Integration Cloud

    With Oracle Data Integration Platform, you can access numerous Oracle and non-Oracle sources and targets to integrate databases with applications. For example, you can use MySQL databases on a third-party cloud as a source for Oracle apps, such as ERP, HCM, CX, NetSuite, and JD Edwards. In addition, Integration Cloud allows you to integrate Oracle Big Data Cloud, Hortonworks Data Platform, or Cloudera Enterprise Hub with a variety of sources: Hadoop, NoSQL, or Oracle Database.

    You can also connect apps on Oracle Cloud with third-party apps. Consider a Quote to Order system. When a customer accepts a quote, the salesperson can update it in the CRM system, leverage Oracle’s predefined integration flows with Oracle ERP Cloud, and turn the quote into an order.

    Java

    Java is one of the top programming languages on Github (Oracle Code One 2018 keynote), with over 12 million developers in the community. All development for Java happens in OpenJDK and all design and code changes are visible to the community. Therefore, the evolution of ongoing projects and features is transparent. Oracle has been talking with developers who are and aren’t using Java to ensure that Java remains open and free, while making enhancements to OpenJDK. In 2018, Oracle open sourced all remaining closed source features: Application Class Data Sharing, Project ZGC, Flight Recorder and Mission Control. In addition, Oracle delivers binaries that are pure OpenJDK code, under the GPL, giving developers freedom to distribute them with frameworks and applications.

    Oracle Cloud Native Services, including Oracle Container Engine for Kubernetes

    Cloud Native Services include the Oracle Container Engine for Kubernetes and Oracle Cloud Infrastructure Registry. Container Engine is based on an unmodified Kubernetes codebase, and clusters can support bare-metal nodes, virtual machines or heterogeneous BM/VM environments. Oracle’s Registry is based on the open Docker v2 standards, allowing you to use the same Docker commands to interact with it as you would with Docker Hub. Container images can be used on-premises and on Container Engine, giving you portability. Container Engine can also interoperate with third-party registries, and Oracle Cloud Infrastructure Registry with third-party Kubernetes environments. In addition, Oracle Functions is based on the open source Fn Project. Code written for Oracle Functions will therefore run not only on Oracle Cloud, but on Fn clusters in third-party clouds and on-premises environments as well.

    Oracle offers the same cloud native capabilities as part of Oracle Linux Cloud Native Environment. This is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and for which enterprise-grade support is offered. With Oracle’s Cloud Native Framework, users can run cloud native applications in the Oracle Cloud and on-premises, in an open hybrid cloud and multi-cloud architecture.

    Oracle Linux Operating System

    Oracle Linux, which is included with Oracle Cloud subscriptions at no additional cost, is a proven, open source operating system (OS) that is optimized for performance, scalability, reliability, and security. It powers everything in the Oracle Cloud – Applications and Infrastructure services. Oracle extensively tests and validates Oracle Linux on Oracle Cloud Infrastructure, and continually delivers innovative new features to enhance the experience in Oracle Cloud.

    Oracle VM VirtualBox

    Oracle VM VirtualBox is the world’s most popular, open source, cross-platform virtualization product. It lets you run multiple operating systems on Mac OS, Windows, Linux, or Oracle Solaris. Oracle VM VirtualBox is ideal for testing, developing, demonstrating, and deploying solutions across multiple platforms on one machine. It supports exporting of virtual machines to Oracle Cloud Infrastructure and enables them to run on the cloud. This functionality facilitates the experience of using VirtualBox as the development platform for the cloud.

    Identity Cloud Services

    Oracle Identity Cloud Service provides 100% API coverage of all product capabilities for rich integration with custom applications. It allows compliance with open standards such as SCIM, REST, OAuth and OpenID Connect for easy application integrations. Customers can easily consume these APIs in their applications to take advantage of identity management capabilities.

    Oracle Identity Cloud Service seamlessly interoperates with on-premises identities in Active Directory to provide Single Sign On between Cloud and On-Premises applications. Through its Identity Bridge component, Identity Cloud can synchronize all the identities and groups from Active Directory into its own identity store in the cloud. This allows organizations to take advantage of their existing investment in Active Directory. And, they can extend their services to Oracle Cloud and external SaaS applications.

    Oracle Blockchain Platform

    Oracle Blockchain Platform is built on open source Hyperledger Fabric making it interoperable with non-Oracle Hyperledger Fabric instances deployed in your data center or in third-party clouds. In addition, the platform uses REST APIs for plug-n-play integration with Oracle SaaS and on-premises apps such as NetSuite ERP, Flexcube core banking, Open Banking API Platform, among others.

    Oracle Mobile Hub (Mobile Backend as a Service – MBaaS)

    Oracle Mobile Hub is an open and flexible platform for mobile app development. With Mobile Hub, you can:

    • Develop apps for any mobile client: iOS or Android based phones

    • Connect to any backend via standard RESTful interfaces and SOAP web services

    • Support both native mobile apps and hybrid apps. For example, you can develop with Swift or Objective C for native iOS apps, Java for native Android apps, and JavaScript for Hybrid mobile apps

    In addition, Oracle Visual Builder (VB) is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open source, standards-based solution to develop, collaborate on, and deploy applications within Oracle Cloud, offering an easy way to create and host web and mobile applications in a secure cloud environment.

    Takeaway

    In choosing a cloud vendor, openness can provide a significant advantage, allowing you to choose amongst languages, databases, hardware, clouds, and on-premises infrastructure.  With a free trial on Oracle Cloud, you can experience the benefits of these open technologies – no strings attached.

    Feel free to start a conversation below.

    DBID Is Not Definitive When Used As An Identifier

    Pete Finnigan - Wed, 2019-03-13 09:46
    Our Audit Trail toolkit PFCLATK has some brief documentation on the page that's linked here, but in summary it is a comprehensive toolkit that allows quick and easy deployment of an audit trail into a customer's database. We are currently....[Read More]

    Posted by Pete On 12/03/19 At 09:20 PM

    Categories: Security Blogs

    Hash Partitions

    Jonathan Lewis - Wed, 2019-03-13 08:13

    Here’s an important thought if you’ve got any large tables which are purely hash partitioned. As a general guideline you should not need partition level stats on those tables. The principle of hash partitioned tables is that the rows are distributed uniformly and randomly based on the hash key so, with the assumption that the number of different hash keys is “large” compared to the number of partitions, any one partition should look the same as any other partition.

    Consider, as a thought experiment (and as a warning), a table of product_deliveries which is hash partitioned by product_id with ca. 65,000 distinct products that have been hashed across 64 partitions. (Hash partitioning should always use a power of 2 for the partition count if you want the number of rows per partition to be roughly the same across all partitions – if you don’t pick a power of two then some of the partitions will be roughly twice the size of others.)

    Consider a query for “deliveries to Basingstoke” – in the absence of a histogram on the delivery location the optimizer will produce a cardinality estimate that is:

    • total rows in table / number of distinct delivery locations in table

    Now consider a query for: “deliveries of product X to Basingstoke” – again in the absence of histograms. The optimizer could have two ways of calculating this cardinality:

    • total rows in table / (number of distinct products in table * number of distinct delivery locations in table)
    • total rows in relevant partition / (number of distinct products in relevant partition * number of distinct delivery locations in relevant partition)

    But given the intent of hash partitioning to distribute data evenly we can make three further observations:

    1. the number of rows in any one partition should be very similar to the number of rows in the table divided by the number of partitions
    2. the number of distinct products in any one partition should be very similar to the number of products in the table divided by the number of partitions
    3. the number of distinct locations in any one partition should be very similar to the number of distinct locations in the whole table.

    The second condition holds because product is the partition key; the third holds because location is not the partition key.

    So we can rewrite the second, partition-oriented, formula as:

    • (total rows in table / number of partitions) / ((number of distinct products in table / number of partitions) * number of distinct locations in table)

    which, re-arranging parentheses and cancelling common factors, reduces to:

    • total rows in table / (number of distinct products in table * number of distinct locations in table)

    which matches the first formula. (Q.E.D.) In the absence of any statistics on hash partitions the optimizer can (ought to be able to) produce reasonable cardinality estimates based purely on table-level stats.
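    Writing R for the total rows in the table, P for the number of partitions, and D_prod and D_loc for the numbers of distinct products and distinct locations, the cancellation can be written as:

    \[ \frac{R / P}{\left(D_{\mathrm{prod}} / P\right)\times D_{\mathrm{loc}}} \;=\; \frac{R}{D_{\mathrm{prod}}\times D_{\mathrm{loc}}} \]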

    In fact if you look back into the history of partitioning this observation is implicit in the early days of composite partitioning when the only option was for range/hash composite partitions – the optimizer never used sub-partition stats to calculate costs or cardinality it used only partition-level statistics. (And it was several years before the optimizer caught up to the fact that (e.g.) range/list composite partitioning might actually need to do arithmetic based on subpartition stats.)

    I did say that the example was also a warning. Hash partitioning is “expected” to have a large number of distinct key values compared to the number of partitions. (If you don’t meet this requirement then possibly you should be using list partitioning). There’s also a “uniformity” assumption built into the arithmetic (both the basic arithmetic and the hand-waving discussion I produced above). Just imagine that your company supplies a handful of products that for some strange reason are incredibly popular  in Basingstoke. If this is the case then the assumption that “all partitions look alike” is weakened and you would have to consider the possibility that the variation would require you to produce a workaround to address problems of poor cardinality estimates that the variation might produce.

    A pattern of this type has two generic effects on the optimizer, of course. First is the simple skew in the data – to have a significant impact the number of rows for the problem products would have to be much larger than average, which suggests the need for a suitably crafted histogram; secondly there’s an implied correlation between a few products and Basingstoke, so you might even end up creating a column group and manually coding a histogram on it to capture the correlation.

     

    March 2019 Update to Integrated SOA Gateway for EBS 12.1.3

    Steven Chan - Wed, 2019-03-13 08:13

    Contributing Author:  Robert Farrington

    I am pleased to announce the availability of a consolidated patch for Oracle E-Business Suite Integrated SOA Gateway for Oracle E-Business Suite Release 12.1.3. Oracle strongly recommends that all EBS R12.1.3 Integrated SOA Gateway customers apply this set of updates.

    You can download the patch from here:

    This cumulative update includes all previously released ISG updates for EBS R12.1.3, plus minor enhancements and fixes for stability and performance of the product. Specific fixes relate to REF CURSOR and SYS_REFCURSOR data types in PL/SQL interface types, and design time operation from the Integration Repository UI when the API contains more than 10 operations. View the patch Readme for a complete listing of bugs fixed in this patch.

    In addition, the following key enhancements are included in this update:

    • Post-clone script for SOAP and REST based web services provided by Integrated SOA Gateway in EBS R12.1.3
    • Automated design time operations in multi-node configuration
    • Elimination of the need to perform service generation and deployment from the backend for multi-node configuration of Integrated SOA Gateway in Oracle E-Business Suite R12.1.3 (MOS Note 1081100.1)
    • Capability to generate and deploy SOAP Services, and also deploy REST Services from the Integration Repository UI for a multi-node environment

    References

    Related Articles

    Categories: APPS Blogs

    [Solved] APP-FND-01564: ORACLE error 8102 in request

    Online Apps DBA - Wed, 2019-03-13 07:35

    For all Apps DBAs: have you ever faced [APP-FND-01564: ORACLE error 8102] while running Purge Concurrent Request and/or Manager Data? Follow the steps at https://k21academy.com/appsdba48 to solve the error, covering 1) what this issue is all about, 2) the error, causes and solution, and 3) troubleshooting with a step-wise flow.

    The post [Solved] APP-FND-01564: ORACLE error 8102 in request appeared first on Oracle Trainings for Apps & Fusion DBA.

    Categories: APPS Blogs

    Selecting Optimal Parameters for XGBoost Model Training

    Andrejus Baranovski - Wed, 2019-03-13 02:22
    There is always a bit of luck involved when selecting parameters for Machine Learning model training. Lately, I have been working with gradient boosted trees, and XGBoost in particular. We are using XGBoost in the enterprise to automate repetitive human tasks. While training ML models with XGBoost, I created a pattern for choosing parameters, which helps me to build new models quicker. I will share it in this post; hopefully you will find it useful too.

    I’m using the Pima Indians Diabetes Database for training; the CSV data can be downloaded from here.

    This is the Python code which runs the XGBoost training step and builds a model. Training is executed by passing pairs of train/test data; this helps to evaluate training quality ad hoc during model construction:
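    A minimal sketch of such a training step follows, assuming the Pima CSV has already been downloaded and using the fit() signature of the XGBoost versions current at the time of writing (the file name, column indexes, split ratio and parameter values are illustrative assumptions, not the author's exact code):

import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Pima Indians Diabetes data: 8 feature columns, the 9th column is the label
dataset = pd.read_csv("pima-indians-diabetes.csv", header=None)
X = dataset.iloc[:, 0:8]
Y = dataset.iloc[:, 8]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=7)

# max_depth, subsample and objective are assumed to be fixed up front, as
# described below; n_estimators and learning_rate are the Step 1 values
model = XGBClassifier(objective="binary:logistic", max_depth=3, subsample=0.3,
                      n_estimators=300, learning_rate=0.01)

# Passing train/test pairs makes XGBoost report evaluation quality each round
eval_set = [(X_train, y_train), (X_test, y_test)]
model.fit(X_train, y_train,
          eval_metric=["error", "logloss"],
          eval_set=eval_set,
          early_stopping_rounds=10,
          verbose=True)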

    Key parameters in XGBoost (the ones which affect model quality greatly), assuming you have already selected max_depth (the more complex the classification task, the deeper the tree), subsample (equal to the evaluation data percentage) and objective (the classification algorithm):
    • n_estimators — the number of learning rounds XGBoost will run
    • learning_rate — the learning speed
    • early_stopping_rounds — overfitting prevention: stop early if there is no improvement in learning
    When model.fit is executed with verbose=True, you will see the evaluation quality of each training round printed out. At the end of the log, you should see which iteration was selected as the best one. It may be that the number of training rounds is not enough to detect the best iteration; in that case XGBoost will select the last iteration to build the model.

    With the matplotlib library we can plot the training results for each run (from the XGBoost output). This helps to understand whether the iteration which was chosen to build the model was the best one possible. Here we are using the sklearn library to evaluate model accuracy and then plotting the training results with matplotlib:
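
    A hedged sketch of that evaluation and plotting step, assuming the model was fitted as above with eval_metric=["error", "logloss"] (so evals_result() holds per-round metrics under the default validation_0/validation_1 keys):

    import matplotlib.pyplot as plt
    from sklearn.metrics import accuracy_score

    # Final accuracy on the held-out data
    y_pred = model.predict(X_test)
    print("Accuracy: %.2f%%" % (accuracy_score(y_test, y_pred) * 100.0))

    # Per-round metrics collected during model.fit
    results = model.evals_result()
    epochs = len(results["validation_0"]["error"])
    x_axis = range(epochs)

    fig, (ax_loss, ax_err) = plt.subplots(1, 2, figsize=(10, 4))

    ax_loss.plot(x_axis, results["validation_0"]["logloss"], label="Train")
    ax_loss.plot(x_axis, results["validation_1"]["logloss"], label="Test")
    ax_loss.set_title("XGBoost Log Loss")
    ax_loss.legend()

    ax_err.plot(x_axis, results["validation_0"]["error"], label="Train")
    ax_err.plot(x_axis, results["validation_1"]["error"], label="Test")
    ax_err.set_title("XGBoost Classification Error")
    ax_err.legend()

    plt.show()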

    Let’s describe my approach to select parameters (n_estimators, learning_rate, early_stopping_rounds) for XGBoost training.

    Step 1. Start with what you feel works best based on your experience or what makes sense
    • n_estimators = 300 
    • learning_rate = 0.01 
    • early_stopping_rounds = 10 
    Results:
    • Stop iteration = 237 
    • Accuracy = 78.35% 
    Results plot:


    With the first attempt, we already get good results for the Pima Indians Diabetes dataset. Training was stopped at iteration 237. The classification error plot shows a lower error rate around iteration 237. This means a learning rate of 0.01 is suitable for this dataset, and early stopping after 10 iterations (if the result doesn't improve in the next 10 iterations) works.

    Step 2. Experiment with the learning rate: try setting a smaller learning rate and increasing the number of learning iterations
    • n_estimators = 500 
    • learning_rate = 0.001 
    • early_stopping_rounds = 10 
    Results:
    • Stop iteration = didn’t stop, spent all 500 iterations 
    • Accuracy = 77.56% 
    Results plot:


    A smaller learning rate didn't work for this dataset. The classification error barely changes and the XGBoost log loss doesn't stabilize even with 500 iterations.

    Step 3. Try to increase the learning rate.
    • n_estimators = 300 
    • learning_rate = 0.1 
    • early_stopping_rounds = 10 
    Results:
    • Stop iteration = 27 
    • Accuracy = 76.77% 
    Results plot:


    With an increased learning rate, the algorithm learns quicker and already stops at iteration 27. The XGBoost log loss error is stabilizing, but the overall classification accuracy is not ideal.

    Step 4. Select the optimal learning rate from the first step and increase early stopping (to give the algorithm more chances to find a better result).
    • n_estimators = 300 
    • learning_rate = 0.01 
    • early_stopping_rounds = 15 
    Results:
    • Stop iteration = 265 
    • Accuracy = 78.74% 
    Results plot:


    A slightly better result is produced, with 78.74% accuracy; this is also visible in the classification error plot.


    Upgrading SQL Server pods on K8s and helm charts

    Yann Neuhaus - Wed, 2019-03-13 02:08

    It has been a while since my last blog post. Today it is about continuing with helm charts and how to upgrade / downgrade SQL Server containers to a specific cumulative update. This was the first write-up on my to-do list.


    Last year, I wrote an introduction to SQL Server containers on K8s. I remember facing some issues when testing upgrade scenarios (probably a lack of knowledge). Since then, I have discovered helm charts; I use them intensively with my environments and they also provide upgrade / rollback capabilities.

    So, the question is how to upgrade an existing SQL Server container to a new cumulative update with a helm chart?

    First of all, during deployment you need to specify a strategy type. There are several strategy types and most of them address upgrade scenarios for stateless applications (ramped/rolling update, blue/green, canary and a/b testing). Unfortunately, with stateful applications like RDBMSs the story is not the same, because the persistent storage cannot be accessed by several pods at the same time. In this case K8s must first stop and remove the current pod and then spin up a new pod with the new version. The "Recreate" strategy type is designed to carry out this task and to address SQL Server pod upgrade scenarios.

    My deployment file is as follows:

    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: {{ template "mssql.fullname" . }}
      labels:
        app: {{ template "mssql.name" . }}
        chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
        release: {{ .Release.Name }}
        heritage: {{ .Release.Service }}
    {{- if .Values.deployment.annotations }}
      annotations:
    {{ toYaml .Values.deployment.annotations | indent 4 }}
    {{- end }}
    spec:
      replicas: {{ .Values.replicaCount }}
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: {{ template "mssql.name" . }}
          release: {{ .Release.Name }}
      template:
        metadata:
          labels:
            app: {{ template "mssql.name" . }}
            release: {{ .Release.Name }}
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: ACCEPT_EULA
                  value: "{{ .Values.acceptEula.value | upper }}"
                - name: MSSQL_PID
                  value: "{{ .Values.edition.value }}"
                - name: MSSQL_SA_PASSWORD
                  valueFrom:
                   secretKeyRef:
                     name: {{ template "mssql.fullname" . }}-sa-secret
                     key: sapassword
                - name: MSSQL_TCP_PORT
                  value: "{{ .Values.service.port.value }}"
                - name: MSSQL_LCID
                  value: "{{ .Values.lcid.value }}"
                - name: MSSQL_COLLATION
                  value: "{{ .Values.collation.value }}"
                - name: MSSQL_ENABLE_HADR
                  value: "{{ .Values.hadr.value }}"
                {{ if .Values.resources.limits.memory }}
                - name: MSSQL_MEMORY_LIMIT_MB
                  valueFrom:
                    resourceFieldRef:
                      resource: limits.memory
                      divisor: 1Mi
                {{ end }}
              ports:
                - name: mssql
                  containerPort: {{ .Values.service.port.value }}
              volumeMounts:
                - name: data
                  mountPath: /var/opt/mssql/data
              livenessProbe:
                tcpSocket:
                   port: mssql
                initialDelaySeconds: {{ .Values.livenessprobe.initialDelaySeconds }}
                periodSeconds: {{ .Values.livenessprobe.periodSeconds }}
              readinessProbe:
                tcpSocket:
                   port: mssql
                initialDelaySeconds: {{ .Values.readinessprobe.initialDelaySeconds }}
                periodSeconds: {{ .Values.readinessprobe.periodSeconds }}
              resources:
    {{ toYaml .Values.resources | indent 12 }}
        {{- if .Values.nodeSelector }}
          nodeSelector:
    {{ toYaml .Values.nodeSelector | indent 8 }}
        {{- end }}
          volumes:
          - name: data
          {{- if .Values.persistence.enabled }}
            persistentVolumeClaim:
              {{- if .Values.persistence.existingDataClaim }}
              claimName: {{ .Values.persistence.existingDataClaim }}
              {{- else }}
              claimName: {{ template "mssql.fullname" . }}-data
              {{- end -}}
          {{- else }}
            emptyDir: {}
          {{- end }}

     

    My default values (in values.yaml) are the following:

    # General parameters
    acceptEula: 
      value: "Y"
    edition: 
      value: "Developer"
    collation: 
      value: SQL_Latin1_General_CP1_CI_AS
    lcid: 
      value: 1033
    hadr: 
        value: 0
    # User parameters
    sapassword: 
      value: Password1
    # Image parameters
    image:
      repository: mcr.microsoft.com/mssql/server
      tag: 2017-CU12-ubuntu
      pullPolicy: IfNotPresent
    # Service parameters
    service:
      type: 
        value: LoadBalancer
      port: 
        value: 1460
      annotations: {}
    deployment:
      annotations: {}
    # Volumes & persistence parameters
    persistence:
      enabled: true
      storageClass: ""
      dataAccessMode: ReadWriteOnce
      dataSize: 5Gi
    # Probe parameters
    livenessprobe:
      initialDelaySeconds: 20
      periodSeconds: 15
    readinessprobe:
      initialDelaySeconds: 20
      periodSeconds: 15
    # Resourcep parameters
    resources:
      limits:
      #  cpu: 100m
        memory: 3Gi
      # requests:
      #  cpu: 100m
      #  memory: 2Gi
    nodeSelector: {}
    
    

    You may notice I will pull a SQL Server image from the MCR with the 2017-CU12-ubuntu tag.

    Let’s now install SQL2017container release:

    $ helm install --name sql2017container .

     

    This command installs a helm release which includes, among others, a deployment, a replicaset with one pod (my SQL Server pod), a secret that contains the sa password, a persistent volume claim to persist my database files (mapped to the /var/opt/mssql/data path inside the pod) and the service to expose the pod on port 1460 TCP.

    $ helm status sql2017container
    LAST DEPLOYED: Tue Mar 12 20:36:12 2019
    NAMESPACE: ci
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/Secret
    NAME                                        TYPE    DATA  AGE
    sql2017container-dbi-mssql-linux-sa-secret  Opaque  1     7m7s
    
    ==> v1/PersistentVolumeClaim
    NAME                                   STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    sql2017container-dbi-mssql-linux-data  Bound   pvc-18304483-44fe-11e9-a668-ca78ebdc2a19  5Gi       RWO           default       7m7s
    
    ==> v1/Service
    NAME                              TYPE          CLUSTER-IP    EXTERNAL-IP     PORT(S)         AGE
    sql2017container-dbi-mssql-linux  LoadBalancer  10.0.104.244  xx.xx.xx.xx  1460:31502/TCP  7m6s
    
    ==> v1beta2/Deployment
    NAME                              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    sql2017container-dbi-mssql-linux  1        1        1           1          7m6s
    
    ==> v1/Pod(related)
    NAME                                               READY  STATUS   RESTARTS  AGE
    sql2017container-dbi-mssql-linux-76b4f7c8f5-mmhqt  1/1    Running  0         7m6s

     

    My SQL Server pod is running with the expected version and CU:

    master> select @@version AS [version];
    +-----------+
    | version   |
    |-----------|
    | Microsoft SQL Server 2017 (RTM-CU12) (KB4464082) - 14.0.3045.24 (X64)
            Oct 18 2018 23:11:05
            Copyright (C) 2017 Microsoft Corporation
            Developer Edition (64-bit) on Linux (Ubuntu 16.04.5 LTS)           |
    +-----------+
    (1 row affected)
    Time: 0.354s

     

    It's time now to upgrade my pod to the latest cumulative update, CU13 (at the moment of this write-up). With helm charts this task is pretty simple. I will just upgrade my release with the new desired tag as follows:

    $ helm upgrade sql2017container . --set=image.tag=2017-CU13-ubuntu
    Release "sql2017container" has been upgraded. Happy Helming!

     

    Let’s dig further into deployment stuff:

    $ kubectl describe deployment sql2017container-dbi-mssql-linux

     

    The interesting part is below:

    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  18m   deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 1
      Normal  ScalingReplicaSet  1m    deployment-controller  Scaled down replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 0
      Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-799ff7979b to 1

     

    Referring to the deployment strategy, the deployment controller has recreated a new ReplicaSet (and a new SQL Server pod) accordingly. A quick check from the client tool confirms the instance has been upgraded correctly:

    master> select @@version AS [version];
    +-----------+
    | version   |
    |-----------|
    | Microsoft SQL Server 2017 (RTM-CU13) (KB4466404) - 14.0.3048.4 (X64)
            Nov 30 2018 12:57:58
            Copyright (C) 2017 Microsoft Corporation
            Developer Edition (64-bit) on Linux (Ubuntu 16.04.5 LTS)           |
    +-----------+
    (1 row affected)
    Time: 0.716s

     

    Another interesting part is how SQL Server detects the new image and starts the upgrade process. Let's dump the SQL Server pod log. I have kept just a sample of messages from the pod log to give a picture of the scripts used during the upgrade.

    $ kubectl logs sql2017container-dbi-mssql-linux-799ff7979b-knqrm
    2019-03-12 19:54:59.11 spid22s     Service Broker manager has started.
    2019-03-12 19:54:59.44 spid6s      Database 'master' is upgrading script 'ProvisionAgentIdentity.sql' from level 234884069 to level 234884072.
    2019-03-12 19:54:59.45 spid6s      Database 'master' is upgrading script 'no_op.sql' from level 234884069 to level 234884072.
    2019-03-12 19:54:59.70 spid6s      Database 'master' is upgrading script 'no_op.sql' from level 234884069 to level 234884072.
    ….
    2019-03-12 19:54:59.70 spid6s      -----------------------------------------
    2019-03-12 19:54:59.70 spid6s      Starting execution of dummy.sql
    2019-03-12 19:54:59.70 spid6s      -----------------------------------------
    …
    2019-03-12 19:55:00.24 spid6s      Starting execution of PRE_MSDB.SQL
    2019-03-12 19:55:00.24 spid6s      ----------------------------------
    2019-03-12 19:55:00.70 spid6s      Setting database option COMPATIBILITY_LEVEL to 100 for database 'msdb'.
    2019-03-12 19:55:00.90 spid6s      -----------------------------------------
    2019-03-12 19:55:00.90 spid6s      Starting execution of PRE_SQLAGENT100.SQL
    2019-03-12 19:55:00.90 spid6s      -----------------------------------------
    …
    2019-03-12 19:55:12.09 spid6s      ----------------------------------
    2019-03-12 19:55:12.09 spid6s      Starting execution of MSDB.SQL
    2019-03-12 19:55:12.09 spid6s      ----------------------------------
    …
    2019-03-12 19:55:12.86 spid6s      -----------------------------------------
    2019-03-12 19:55:12.86 spid6s      Starting execution of MSDB_VERSIONING.SQL
    2019-03-12 19:55:12.86 spid6s      -----------------------------------------
    …
    2019-03-12 19:55:51.68 spid6s      -----------------------------------------
    2019-03-12 19:55:51.68 spid6s      Starting execution of EXTENSIBILITY.SQL
    2019-03-12 19:55:51.68 spid6s      -----------------------------------------
    …
    2019-03-12 19:56:01.51 spid6s      --------------------------------
    2019-03-12 19:56:01.51 spid6s      Starting execution of Alwayson.SQL
    2019-03-12 19:56:01.51 spid6s      --------------------------------
    …
    2019-03-12 19:56:29.17 spid6s      ------------------------------------
    2019-03-12 19:56:29.17 spid6s      Moving 2005 SSIS Data to 2008 tables
    2019-03-12 19:56:29.17 spid6s      ------------------------------------
    …
    2019-03-12 19:56:32.52 spid6s      ------------------------------------------------------
    2019-03-12 19:56:32.52 spid6s      Starting execution of UPGRADE_UCP_CMDW_DISCOVERY.SQL
    2019-03-12 19:56:32.52 spid6s      ------------------------------------------------------
    …
    2019-03-12 19:56:32.66 spid6s      ------------------------------------------------------
    2019-03-12 19:56:32.66 spid6s      Starting execution of SSIS_DISCOVERY.SQL
    2019-03-12 19:56:32.66 spid6s      ------------------------------------------------------
    …
    2019-03-12 19:56:32.83 spid6s      ------------------------------------------------------
    2019-03-12 19:56:32.83 spid6s      Start provisioning of CEIPService Login
    2019-03-12 19:56:32.83 spid6s      ------------------------------------------------------
    …

     

    A set of scripts developed by the SQL Server team runs during the SQL Server pod startup and updates different parts of the SQL Server instance.

    Helm provides a command to view release history …

    $ helm history sql2017container
    REVISION        UPDATED                         STATUS          CHART                   DESCRIPTION
    1               Tue Mar 12 20:36:12 2019        SUPERSEDED      dbi-mssql-linux-1.0.0   Install complete
    2               Tue Mar 12 20:53:26 2019        DEPLOYED        dbi-mssql-linux-1.0.0   Upgrade complete

     

    … and to rollback to previous release revision if anything goes wrong:

    $ helm rollback sql2017container 1

     

    The same process applies here. The deployment controller recreates a ReplicaSet and a SQL Server pod downgraded to the previous version.

    $ kubectl describe deployment sql2017container-dbi-mssql-linux
    Events:
      Type    Reason             Age               From                   Message
      ----    ------             ----              ----                   -------
      Normal  ScalingReplicaSet  31m               deployment-controller  Scaled down replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 0
      Normal  ScalingReplicaSet  31m               deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-799ff7979b to 1
      Normal  ScalingReplicaSet  6m                deployment-controller  Scaled down replica set sql2017container-dbi-mssql-linux-799ff7979b to 0
      Normal  ScalingReplicaSet  6m (x2 over 49m)  deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 1

     

    The same set of T-SQL scripts seems to be executed again during the SQL Server pod startup, this time for downgrade purposes.

    The release rollback is logged in the release history:

    $ helm history sql2017container
    REVISION        UPDATED                         STATUS          CHART                   DESCRIPTION
    1               Tue Mar 12 20:36:12 2019        SUPERSEDED      dbi-mssql-linux-1.0.0   Install complete
    2               Tue Mar 12 20:53:26 2019        SUPERSEDED      dbi-mssql-linux-1.0.0   Upgrade complete
    3               Tue Mar 12 21:18:57 2019        DEPLOYED        dbi-mssql-linux-1.0.0   Rollback to 1

     

    Rollback capabilities of helm charts (and implicitly of K8s) may be attractive, but for database applications they will likely not fit all upgrade scenarios. To be used sparingly … What's next? Taking a look at upgrade scenarios with availability groups on K8s for sure … see you in a next write-up!

     

     

    Cet article Upgrading SQL Server pods on K8s and helm charts est apparu en premier sur Blog dbi services.

    SQL Tuning – Mix NULL / NOT NULL Values

    Yann Neuhaus - Tue, 2019-03-12 18:41

    One of the difficulties when writing a SQL query (static SQL) is to have, in the same WHERE clause, different conditions handling null values and not-null values for a predicate.

    Let me explain with an example:

    Users can enter different values for a user field from an OBI report:
    – If no value is entered, then all rows must be returned.
    – If one value is entered, then only the row(s) matching the filter must be returned.
    – If a list of values is entered, then only the row(s) matching the filter must be returned.

    The SQL we want to write must take into account all possible conditions (the three listed above).

    Here is the first version of the SQL query written by the customer :

    select * 
    from my_table a
    WHERE a.pt_name LIKE decode(:PT_PARAM, NULL, '%', '')
    OR a.pt_name IN (:PT_PARAM);
    

    :PT_PARAM is the user variable.

    The problem with this query is that both conditions:
    – a.pt_name LIKE decode(:PT_PARAM, NULL, '%', '')
    – a.pt_name IN (:PT_PARAM)
    are always evaluated, so unnecessary work will be done by the Oracle optimizer.

    We can prove that by checking the execution plan:

    If :PT_PARAM is equal to 'Value1':

    EXPLAIN PLAN FOR
    select * 
    from my_table a  
    WHERE a.pt_name LIKE decode('Value1', NULL, '%', '')
    OR a.pt_name IN ('Value1');
    
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    
    Plan hash value: 1606647163
     
    --------------------------------------------------------------------------------------------------------
    | Id  | Operation                           | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                    |                  |     5 |  1140 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE         |     5 |  1140 |     3   (0)| 00:00:01 |
    |   2 |   BITMAP CONVERSION TO ROWIDS       |                  |       |       |            |          |
    |   3 |    BITMAP OR                        |                  |       |       |            |          |
    |*  4 |     BITMAP INDEX SINGLE VALUE       | BIX_DMED_TERM_01 |       |       |            |          |
    |   5 |     BITMAP MERGE                    |                  |       |       |            |          |
    |*  6 |      BITMAP INDEX RANGE SCAN        | BIX_DMED_TERM_01 |       |       |            |          |
    --------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       4 - access("A"."PT_NAME"='Value1')
       6 - access("A"."PT_NAME" LIKE NULL)
           filter("A"."PT_NAME" LIKE NULL AND "A"."PT_NAME" LIKE NULL)
    

    The Oracle optimizer performs two accesses:
    – one access for the NULL value
    – one access for the 'Value1' value

    The first access is not necessary since the user selected a not-null value ('Value1'). Indeed, if the user selects a not-null value ('Value1'), we don't want Oracle to evaluate the condition for the NULL value.

    To avoid this double access, it's necessary to rewrite the SQL statement like this:

    select * 
    from my_table a
    where (:PT_PARAM is null AND a.pt_name like '%')
    OR (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM));
    

    We just add a SQL clause indicating that if the first condition is TRUE, the second condition is FALSE, and vice versa:
    if (:PT_PARAM is null AND a.pt_name like '%') is TRUE, then (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM)) is FALSE
    if (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM)) is TRUE, then (:PT_PARAM is null AND a.pt_name like '%') is FALSE

    Checking the execution plan related to the new SQL statement:

    EXPLAIN PLAN FOR
    select * 
    from my_table a
    where ('Value1' is null AND a.pt_name like '%')
    OR ( 'Value1' IS NOT NULL AND a.pt_name in ('Value1'));
    
    Plan hash value: 2444798625
     
    --------------------------------------------------------------------------------------------------------
    | Id  | Operation                           | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                    |                  |     5 |  1140 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE         |     5 |  1140 |     2   (0)| 00:00:01 |
    |   2 |   BITMAP CONVERSION TO ROWIDS       |                  |       |       |            |          |
    |*  3 |    BITMAP INDEX SINGLE VALUE        | BIX_DMED_TERM_01 |       |       |            |          |
    --------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       3 - access("A"."PT_NAME"='Value1')
    

    Now only one access is done, the one related to the value 'Value1' selected by the user.

    Conclusion:

    Tuning a SQL query can be done in different ways: modifying the physical design of a table (indexes, partitioning), influencing the optimizer (hints) to force an execution plan, or modifying Oracle optimizer database parameters.

    But very often, SQL tuning can be done "simply" by rewriting the SQL query. Most of the time, a performance problem is due to a badly written SQL statement.

    The first pieces of advice before writing a SQL query are:
    – always understand the business needs in order to avoid misinterpretations.
    – avoid unnecessary steps for the Oracle optimizer by checking the Oracle execution plan in detail, to control the path Oracle chooses to access the data.
    – avoid writing complex SQL – SQL is a very simple language, don't forget it.

    Cet article SQL Tuning – Mix NULL / NOT NULL Values est apparu en premier sur Blog dbi services.

    Collaborate 2019

    Jim Marion - Tue, 2019-03-12 16:50
    Collaborate 2019 is just around the corner. San Antonio is one of my favorite conference locations, with the riverwalk right outside the conference center. I will be presenting the following sessions next month at Collaborate:


    I look forward to seeing you there!

    OpenText Enterprise World Europe 2019 – Partner Day

    Yann Neuhaus - Tue, 2019-03-12 16:23

    It was the first day of #OTEW here at the Austria International Center in Vienna; Guillaume Fuchs and I were invited to attend the Partner Global sessions.

    Welcome to OTEW Vienna 2019

    Mark J. Barrenechea, OpenText's CEO & CTO, started the day with some general topics concerning global trends and achievements like:

    • More and more partners and sponsors
    • Cloud integration direction
    • Strong security brought to customers
    • AI & machine learning as the new trend
    • A new customer wave made of Gen Z and millennials to consider
    • OpenText #1 in Content Services in 2018
    • Turned to the future with Exabyte goals (high-level transfers and storage)
    • Pushing to upgrade to version 16, the most complete Content Platform ever for security and integration
    • A real trend of SaaS with the new OT2 solutions

    OpenText Cloud and OT2 is the future


    Today the big concern is the sprawl of data; OpenText is addressing this point by centralizing data and flows to create an information advantage. Using the Cloud and OT2 SaaS/PaaS will open the business to everything.

    OT2 is EIM as a service: a hybrid cloud platform that brings security and scalability to customer solutions and that you can integrate with leading applications like O365, Microsoft Teams, Documentum and more; it provides SaaS as well. One place for your data and many connectors to it. More info to come, stay tuned.

    Smart View is the default

    Smart View is the new OpenText UI default for every component, such as D2 for Documentum, SAP integration, Extended ECM, SuccessFactors and so on.


    Documentum and D2

    New features:

    • Add documents to subfolders without opening the folder first
    • Multi-item download -> Zip and download
    • Download phases displayed in a progress bar
    • Pages editable inline with Smart View
    • Possibility to add widgets in Smart View
    • Workspace look improved in Smart View
    • Image/media display improved: Gallery View with sorting, filters by name
    • Threaded discussions in the Smart View look and feel
    • New visual representation for permission management
    • Mobile capabilities
    • Integrated in other lead applications (Teams, SAP, SharePoint and so on…)


    OpenText Roadmap

    OpenText trends are the following:

    • New UI for products: Smart View: All devices, well integrated to OT2
    • Content In Context
      • Embrace Office 365, with Documentum integration
      • Integration of documentum in SAP
    • Push to Cloud
      • More cloud based product: Docker, Kubernetes
      • Run applications anywhere with OpenText Cloud, Azure, AWS, Google
      • SaaS Applications & Services on OT2
    • Line Of Business
      • SAP applications
      • LoB solutions like SuccessFactors
      • Platform for industry solutions like Life Science, Engineering and Government
    • Intelligent Automation
      • Information extraction with machine learning (Capture)
      • Cloud capture apps for SAP, Salesforce, etc
      • Drive automation with Document Generation
      • Automatic sharing with OT Core
      • Leverage Magellan and AI
      • Personal Assistant / Bots
    • Governance:
      • Smart Compliance
      • GDPR and DPA ready
      • Archiving and Application decommissioning

    Conclusion

    After this first day at OTEW, we can see that OpenText is really pushing its new UI with Smart View, as well as centralized services and storage with OT2 and OpenText Cloud solutions. Content Services will become the cornerstone for all content storage, with pluggable interfaces and components provided by the OT2 platform.

    Cet article OpenText Enterprise World Europe 2019 – Partner Day est apparu en premier sur Blog dbi services.
