Feed aggregator

The Oracle Flow Builder difference

Anthony Shorten - Wed, 2015-12-02 20:15

One of the key features of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities is its support for Oracle Flow Builder. For those not familiar with this product, it is a component and flow management tool for quickly building and maintaining testing assets, with reduced skill requirements.

Oracle Flow Builder is not a new product for Oracle. Previously it was exclusively part of the successful Oracle E-Business Suite product. It was developed to automate the testing of that product, reducing time to market and reducing the risk in implementations and upgrades. Originally developed for internal quality assurance, it was later released, with great success, to Oracle E-Business Suite customers. Customer and QA teams reported savings of up to 70% on testing time.

Keen to realize these savings across other products, Oracle moved Oracle Flow Builder into the functional testing side of the Oracle Application Testing Suite. This is where our pack came into existence. We had the components available but originally no way for customers to quickly assemble those components into flows. It is possible in OpenScript to code a set of component calls, but that requires a higher level of skill, and it quickly became apparent that Oracle Flow Builder was the solution.

The two development teams worked closely together to make the pack the first product set outside of Oracle E-Business Suite to support Oracle Flow Builder. This relationship offers great advantages for the solution:

  • Oracle Flow Builder allows non-development resources to build and maintain components and testing flows.
  • Oracle Flow Builder includes a component management toolset to manage the availability and use of components for testing.
  • Oracle Flow Builder includes a flow management toolset that allows testers to orchestrate components into testing flows representing different business scenarios. This makes modelling business flows much easier.
  • Oracle Flow Builder is a team-based solution running on a server rather than individual desktops. Typically testing tools, even OpenScript, run on individual desktops, which makes team development much harder.
  • Oracle Flow Builder is tightly integrated with Oracle's other testing products in the Oracle Application Testing Suite family to implement testing planning, testing automation and load testing.

Oracle Flow Builder is a key part of our testing infrastructure, and it is also a key part of the testing solution for Oracle Utilities products.

For training on Oracle Flow Builder you can use the Oracle Learning Library training on YouTube or the Oracle Learning Library itself.

Oracle Functional/Load Testing Advanced Pack for Oracle Utilities 5.0.0.1.0 Released

Anthony Shorten - Wed, 2015-12-02 17:28

A new version of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities is now available for download from Oracle Software Delivery Cloud.

Look for Oracle Functional Testing Advanced Pack for Oracle Utilities, Version 5.0.0.1.0; this download includes support for both functional and load testing.

This new version of the pack supports a wider range of Oracle Utilities products and versions. It also includes a component generator and component verifier that allow implementations to quickly build custom components from the metadata.

This new version of the pack supports the following releases:

  • Oracle Utilities Customer Care And Billing 2.4.0.3 (new)
  • Oracle Utilities Customer Care And Billing 2.5.0.1
  • Oracle Utilities Mobile Workforce Management 2.2.0.3 (updated)
  • Oracle Real Time Scheduler 2.2.0.3 (updated)
  • Oracle Utilities Application Framework 4.2.0.3 (new)
  • Oracle Utilities Application Framework 4.3.0.1
  • Oracle Utilities Meter Data Management 2.1.0.3 (new)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3 (new)
  • Oracle Utilities Work And Asset Management 2.1.1 (updated)
  • Oracle Utilities Operational Device Management 2.1.1 (new)

The pack is delivered as content for the Oracle Application Testing Suite, covering both functional and load testing.

IBM Containers running Spring Boot Applications with IBM Bluemix

Pas Apicella - Wed, 2015-12-02 16:56
There is now a command line plugin for IBM Containers on Bluemix, so you can push and run Docker images using the CF CLI itself. The steps below show how to set this up; I use a basic Spring Boot application as a Docker image to test it out.

Steps

Take note of the local Docker host IP, as I test my Docker image on my laptop prior to pushing it to Bluemix. In this example it was as follows.

-> docker is configured to use the default machine with IP 192.168.99.100

1. Install the latest CF command line; I used the following version.

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf --version
cf version 6.14.0+2654a47-2015-11-18


https://github.com/cloudfoundry/cli

2. Install IBM Containers Cloud Foundry plug-in

pasapicella@pas-macbook-pro:~$ cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-mac

**Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.**

Do you want to install the plugin https://static-ice.ng.bluemix.net/ibm-containers-mac? (y or n)> y

Attempting to download binary file from internet address...
9314192 bytes downloaded...
Installing plugin /var/folders/rj/5r89y5nd6pd4c9hwkbvdp_1w0000gn/T/ibm-containers-mac...
OK
Plugin IBM-Containers v0.8.788 successfully installed.


Note: The default plugin directory is as follows

$HOME/.cf/plugins


3. Login to IBM Containers

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS$ cf ic login
Client certificates are being retrieved from IBM Containers...
Client certificates are being stored in /Users/pasapicella/.ice/certs/...
Client certificates are being stored in /Users/pasapicella/.ice/certs/containers-api.ng.bluemix.net/0bcbcada-bd11-4372-b416-955dff3078a1...
OK
Client certificates were retrieved.

Deleting old configuration file...
Checking local Docker configuration...
OK

Authenticating with registry at host name registry.ng.bluemix.net
OK
Your container was authenticated with the IBM Containers registry.
Your private Bluemix repository is URL: registry.ng.bluemix.net/apples

You can choose from two ways to use the Docker CLI with IBM Containers:

Option 1: This option allows you to use "cf ic" for managing containers on IBM Containers while still using the Docker CLI directly to manage your local Docker host.
    Use this Cloud Foundry IBM Containers plug-in without affecting the local Docker environment:

    Example Usage:
    cf ic ps
    cf ic images

Option 2: Use the Docker CLI directly. In this shell, override the local Docker environment to connect to IBM Containers by setting these variables. Copy and paste the following commands:
    Note: Only Docker commands followed by (Docker) are supported with this option.

     export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
     export DOCKER_CERT_PATH=/Users/pasapicella/.ice/certs/containers-api.ng.bluemix.net/0bcbcada-bd11-4372-b416-955dff3078a1
     export DOCKER_TLS_VERIFY=1

    Example Usage:
    docker ps
    docker images
4. View docker images

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS$ cf ic images
REPOSITORY                                        TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
registry.ng.bluemix.net/ibm-mobilefirst-starter   latest              5996bb6e51a1        6 weeks ago         770.4 MB
registry.ng.bluemix.net/ibm-node-strong-pm        latest              ef21e9d1656c        8 weeks ago         528.7 MB
registry.ng.bluemix.net/ibmliberty                latest              2209a9732f35        8 weeks ago         492.8 MB
registry.ng.bluemix.net/ibmnode                   latest              8f962f6afc9a        8 weeks ago         429 MB
registry.ng.bluemix.net/apples/etherpad_bluemix   latest              131fd7a39dff        11 weeks ago        570 MB


5. Clone application to run as docker image

$ git clone https://github.com/spring-guides/gs-rest-service.git

6. Create a file called Dockerfile as follows in the "complete" directory

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cat Dockerfile
FROM java:8
VOLUME /tmp
ADD target/gs-rest-service-0.1.0.jar app.jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]


7. Package the demo

$ mvn package

8. Build docker image

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker build -t gs-rest-service .
Sending build context to Docker daemon 13.44 MB
Step 1 : FROM java:8
8: Pulling from library/java
1565e86129b8: Pull complete
a604b236bcde: Pull complete
5822f840e16b: Pull complete
276ac25b516c: Pull complete
5d32526c1c0e: Pull complete
0d61f7a71c59: Pull complete
16952eac0a64: Pull complete
2fb3388c8597: Pull complete
ca603b247c8e: Pull complete
1785f2bc7c99: Pull complete
40e61a6ae215: Pull complete
32f541968fe6: Pull complete
Digest: sha256:52a1b487ed34f5a76f88a336a740cdd3e7b4486e264a3e69ece7b96e76d9f1dd
Status: Downloaded newer image for java:8
 ---> 32f541968fe6
Step 2 : VOLUME /tmp
 ---> Running in 030f739777ac
 ---> 22bf0f9356a1
Removing intermediate container 030f739777ac
Step 3 : ADD target/gs-rest-service-0.1.0.jar app.jar
 ---> ac590c46b73b
Removing intermediate container 9790c39eb1f7
Step 4 : RUN bash -c 'touch /app.jar'
 ---> Running in e9350ddebb75
 ---> 697d245c6afb
Removing intermediate container e9350ddebb75
Step 5 : ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /app.jar
 ---> Running in 42fc22473930
 ---> df853abfea57
Removing intermediate container 42fc22473930
Successfully built df853abfea57


9. Run locally

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker run --name gs-rest-service -p 80:8080 -d -t gs-rest-service
a392aa15da81fb4ca6c16a6307e0bd1c6b22f9a046228f1fc477d3fe12e15f16


10. Test as follows

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers$ curl http://192.168.99.100/greeting
{"id":1,"content":"Hello, World!"}


11. Push to Bluemix as follows

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker tag gs-rest-service registry.ng.bluemix.net/apples/gs-rest-service
pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker push registry.ng.bluemix.net/apples/gs-rest-service
The push refers to a repository [registry.ng.bluemix.net/apples/gs-rest-service] (len: 1)
Sending image list
Pushing repository registry.ng.bluemix.net/apples/gs-rest-service (1 tags)
Image 5822f840e16b already pushed, skipping
Image 276ac25b516c already pushed, skipping
Image 5d32526c1c0e already pushed, skipping
Image a604b236bcde already pushed, skipping
Image 1565e86129b8 already pushed, skipping
Image 0d61f7a71c59 already pushed, skipping
Image 2fb3388c8597 already pushed, skipping
Image 16952eac0a64 already pushed, skipping
Image ca603b247c8e already pushed, skipping
Image 1785f2bc7c99 already pushed, skipping
Image 40e61a6ae215 already pushed, skipping
Image 32f541968fe6 already pushed, skipping
22bf0f9356a1: Image successfully pushed
ac590c46b73b: Image successfully pushed
697d245c6afb: Image successfully pushed
df853abfea57: Image successfully pushed
Pushing tag for rev [df853abfea57] on {https://registry.ng.bluemix.net/v1/repositories/apples/gs-rest-service/tags/latest}


12. List all allocated IP addresses

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic ip list
Number of allocated public IP addresses:  2

IpAddress        ContainerId
134.168.13.83
134.168.15.105


13. Create a container from the uploaded image

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic run -p 8080 --memory 512 --name pas-sb-container registry.ng.bluemix.net/apples/gs-rest-service:latest
b1fe3159-0c19-4d54-b0f5-cdd938618deb


14. Assign IP to container

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic ip bind 134.168.13.83 pas-sb-container
OK
The IP address was bound successfully.


15. Verify it's running

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                  PORTS                          NAMES
3794802b-b0c                  ""                  4 minutes ago       Running 3 minutes ago   134.168.13.83:8080->8080/tcp   pas-sb-container

16. Invoke as follows

$ curl http://134.168.13.83:8080/greeting


More Information

Plugin Reference ->

https://www.eu-gb.bluemix.net/docs/containers/container_cli_reference_cfic.html

Installing the cf ic plugin ->

https://www.eu-gb.bluemix.net/docs/containers/doc/container_cli_cfic.html

Categories: Fusion Middleware

Channel Based Architectures

Anthony Shorten - Tue, 2015-12-01 22:43

Over the last few releases of the Oracle Utilities Application Framework, architecture changes have been introduced to move towards a more channel-based architecture. The idea is that a channel is an interface method that allows access to the product. One of the key principles is to divide the traffic into different channels so that each channel can be sized, managed and secured using methods optimized for it.

In the Oracle Utilities Application Framework we have implemented the following channels:

  • Online Server - This is the Web-based interface that allows online users access to the OLTP functionality of the product.
  • Integration Server - This is a web-service-based interface supporting SOAP, MDB, SOA, REST, etc., allowing external systems to integrate with the product.
  • Batch Server - This is a command-line, Oracle Coherence-based channel to support background or batch processing.
  • Mobile Server (New) - This is a new channel we have started to introduce progressively across products to support connected and disconnected mobile solutions.

This channel based architecture is designed with the following principles:

  • Each channel can be installed separately or as a composite installation. In the latest version of the Oracle Utilities Application Framework we introduced the idea of an installation role. When you install the product you decide its role (or roles), which enables a channel for that installation. The role can be decided at installation time but can be changed, for flexibility, after installation without the need for re-installation. This also allows for what we call disparate installations, which are installations across many hosts (for availability/scalability). We introduced an environment identifier which allows products like Oracle Enterprise Manager to recognize virtual disparate installations.
  • Each channel can be deployed individually. This allows for flexibility in implementing Oracle's Maximum Availability Architecture based solutions to support high levels of availability and scalability. This means a channel can be clustered individually or implemented in component clusters as well (depending on your preferences).
  • Each channel can be secured appropriately using the technology available to you. This essentially means you can use the facilities in the J2EE Web Application Server and associated security products to secure each channel appropriately. For example, the Integration Server, which is Web Services based, can additionally be secured with Oracle Web Services Manager and the policies supported by Oracle WebLogic Server. We support the security connectors that Oracle supplies for its servers and also the advanced security offered by other Oracle security products such as the Oracle Identity Management Suite.
  • Each channel can be sized and ring-fenced using features like Database Resource Manager and Oracle WebLogic Work Managers. This allows each channel to be controlled at the architectural level to isolate and preserve capacity. This is important in cloud implementations, for example, to ensure that capacity is maintained. It also allows flexibility in processing capacity, with rules for allocating capacity specified at the channel level or lower.
  • Oracle Enterprise Manager and the Application Management Pack for Oracle Utilities have been updated to reflect this new architecture, allowing you to track and monitor the appropriate metrics for each channel.

Over the next few releases, additional facilities will be included to enhance and reinforce the channel based architecture we have implemented. We feel that it will make implementations easier to plan and software easier to manage in the long term.

Database Resource Plan Support

Anthony Shorten - Tue, 2015-12-01 16:59

Database resource plans allow user and session limits to be managed, giving fine-grained control of resource usage on a machine. They allow database administrators to set resource limits and configure resource sharing so that transactions, users and channels have appropriate access to the finite resources of the machine.

For Oracle Utilities products it is now possible to use database resource plans to control resources. The support varies with each version of the Oracle Utilities Application Framework, but the use cases are as follows:

  • In Oracle Utilities Application Framework V4.x we introduced a different database user for each channel (online, web services and batch). These can be used in the database resource plan definition to control resources at a user level. In older versions of the Oracle Utilities Application Framework (V2.x) we supplied common users, but it was possible, using templates, to implement separate users for at least online and batch, so database resource plans can be implemented there as well. This means that you can set up plans at the user level to control resources if necessary.
  • In all versions of the Oracle Utilities Application Framework we have a separate read-only user. This is typically used for reporting tools. You can attach resource plans to this user to limit the resources used by reporting tools (for example, CPU and time limits).
  • In Oracle Utilities Application Framework V4.x we introduced additional session values to expose module, action, client_identifier and client_info for active sessions. It is also possible to use these values to set up advanced resource plans if desired.

Setting up database resource plans is done at the database level using database tools such as SQL*Plus, SQL Developer, EM Express or Oracle Enterprise Manager. A whitepaper has been written to explain the facility, along with an example plan; it is available from My Oracle Support as Using the Database Resource Manager to Manage Database Server Resources (Doc ID 2067783.1).
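To give a flavor of what such a plan looks like, here is a minimal sketch using the DBMS_RESOURCE_MANAGER API. The plan, group and percentage values are illustrative only, and CISREAD stands in for whatever read-only user your installation defines; the whitepaper above remains the authoritative reference.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- A consumer group for read-only reporting sessions
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'REPORTING_GROUP',
    comment        => 'Read-only reporting tool sessions');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'UTILITIES_PLAN',
    comment => 'Example per-channel resource plan');

  -- Cap the reporting channel and leave the rest to everything else
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'UTILITIES_PLAN',
    group_or_subplan => 'REPORTING_GROUP',
    comment          => 'Reporting capped at 20% CPU',
    mgmt_p1          => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'UTILITIES_PLAN',
    group_or_subplan => 'OTHER_GROUPS',  -- mandatory catch-all group
    comment          => 'All other sessions',
    mgmt_p1          => 80);

  -- Map the read-only database user to the group (the user also needs the
  -- switch privilege, via DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP)
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'CISREAD',         -- illustrative read-only user name
    consumer_group => 'REPORTING_GROUP');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

The plan only takes effect once activated, for example with ALTER SYSTEM SET resource_manager_plan = 'UTILITIES_PLAN';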

How to fix Putty timeout issue

Arun Bavera - Tue, 2015-12-01 14:00

Set this in Putty/SuperPutty:

[Screenshot: PuTTY configuration dialog]

(The usual fix is to set Connection → "Seconds between keepalives" from 0 to a small value such as 30, so the session sends keepalive packets and idle connections are not dropped.)

Categories: Development

Oracle Management Cloud – An Overview

Debu Panda - Tue, 2015-12-01 07:30
Most organizations are transforming themselves into digital enterprises, and IT plays a key role in this transformation. The ways applications are built, delivered, and consumed have changed significantly in the last few years. Organizations have adopted agile methodologies and are delivering applications very rapidly, have adopted hybrid cloud, and many applications are now consumed from mobile devices. This transition poses a lot of challenges to IT organizations, and they need new-generation tools that can manage their applications and infrastructure.


Oracle Management Cloud is a suite of next-generation integrated monitoring, management, and analytics solutions for IT organizations. It enables a real-time, collaborative environment in which all stakeholders have a clear end-to-end view of the applications and technologies that support business services. Oracle Management Cloud is part of the Oracle Cloud platform offerings.

Top Concerns for DevOps/IT Ops

The following figure shows the top concerns for DevOps/IT Ops.




Many organizations lose a lot of revenue and credibility due to unplanned application outages, and they spend a lot of expensive hours in war rooms instead of focusing on innovation. Oracle Management Cloud aims to remove the unnecessary time spent in war rooms by eliminating the information silos in management that exist across end-user, application, infrastructure, and machine data.

Oracle Management Cloud is designed for the modern heterogeneous IT environment running either Oracle or non-Oracle software and infrastructure. It supports applications deployed on-premises, in a private cloud, in Oracle Public Cloud, or in third-party cloud services.

Following are the three services available now:


I have summarized the high-level features supported by these services. I will write more about these services in future blogs.


  • Application Performance Monitoring
    Persona: DevOps, IT Ops, Developer, App Support
    Features: End User Monitoring; Server Request Performance; Application Topologies; Integrated Log Visibility; Integration with Analytics

  • Log Analytics
    Persona: DevOps, IT Ops (DBA, MW Admin, Sys Admin), Developer, App Support, Business
    Features: Light-touch data aggregation of all kinds of machine data; Topology-Aware Exploration; Machine Learning; APM Integration; Dashboards

  • IT Analytics
    Persona: Business Analyst, Capacity Manager, IT Ops, DevOps
    Features: Analyze Resource Usage; Discover Systemic or Common Performance Problems; Plan for the Future



Key Benefits
Following are some of the key benefits you can get from Oracle Management Cloud:
  • Gain 360-degree insight into the performance, availability, and capacity of your applications and infrastructure investments
  • Find and fix application issues fast by eliminating unnecessarily complex manual application monitoring processes and multiple toolsets
  • Improve the efficiency of IT organizations by reducing dependence on large groups of IT staff participating in war rooms
  • Search, explore, and correlate machine data to troubleshoot problems faster, derive operational and business insight, and make better decisions
  • Make IT organizations proactive by identifying systemic problems and capacity issues
  • Reduce the cost of operations, as these services are offered in the cloud and customers do not have to maintain any underlying infrastructure

Resources
Here are links to a few resources if you want to learn more about Oracle Management Cloud:

Open Source Cool Web App with PL/SQL and Formspider

Gerger Consulting - Tue, 2015-12-01 01:21
As an independent consultant, Nicholas Mwaura needed a Formspider demo application so that he could show the best of the product to his potential clients.

However, all the sample applications on the Formspider web site were developer-oriented. We had no application online that consultants and IT managers could use to impress other stakeholders with Formspider. (Facepalm)

Nicholas decided to build this demo application himself even though it would be his first time working with Formspider. That is how much the tool encouraged him.

Think about this for a minute. When we learn a new tool, most of us are happy if we build a Hello World application as our first one. Nicholas built a demo application to demonstrate the best of Formspider. This speaks volumes about his high technical skills and how Formspider empowers Oracle Forms and PL/SQL developers.

You can watch the entire webinar below. The webinar consists of four parts:

1) Introduction to Formspider by me

2) Nicholas Mwaura on Formspider

3) Development of the Demo Application

4) Questions and Answers



Below are the slides Nicholas used during the webinar:



Open Source Demo Application



Nicholas is sharing his work as an open source application with the Formspider community. This is indeed very nice of him, and we are much indebted to him for his generosity.

You can use the application at this link.

You can download the source code of the application from this link.

If you’d like to contribute to this open source project, here is the project’s GitHub page.

Yalim K. Gerger

Founder
Categories: Development

Developing with Oracle MAF and Oracle ADF Business Components - The REST Edition

Shay Shmeltzer - Mon, 2015-11-30 18:24

When Oracle ADF Mobile was released over 3 years ago, one of the first blogs I created on this topic showed how to leverage Oracle ADF Business Components to access a server database and create a mobile front end on top of it.

Since then both frameworks have matured, and we have learned some best practices doing implementations internally and for customers. Today I'm going to show you a better way to build this type of application, specifically leveraging REST as the communication protocol between the ADF backend and the Oracle MAF front end. REST-based integration performs much better than SOAP for this mobile scenario, and as you'll see, development is just as simple.

Specifically I'm leveraging the Oracle A-Team Mobile Persistence Accelerator (AMPA) JDeveloper extension - this extension simplifies MAF's interaction with REST backends, and has some cool extra features if your REST services are based on ADF BC.

I used JDeveloper 12.2.1 to expose REST services from my ADF Business Components.  If you are not familiar with how to do that, see this blog on exposing ADF BC as REST services, and then this blog about enabling CORS for ADF Business Components.

The video below picks up the same application (Application14) and continues from where the previous two ended. 

Now let's see the MAF development part:

As you can see, it is quite easy to create your MAF UI. The AMPA extension does a lot of work for you, making access to the REST backend as easy as possible (thanks go out to Steven Davelaar).

The AMPA extension can also generate a complete UI for you - so you can give that wizard a try too if you are looking for even more productivity.

Categories: Development

Get Proactive - Follow the Oracle Support Events Calendar

Joshua Solomin - Mon, 2015-11-30 16:36
See Upcoming Support Events with the Get Proactive Events Calendar
Web application that automatically tracks Advisor Webcasts / newsletter releases / Support training events and also synchronizes events you select into your calendar

Oracle Support sponsors a variety of activities (like our popular Advisor Webcasts) to help customers work more effectively with their Oracle products. Follow our Event Calendar to stay up to date on upcoming webcasts and events.

The web app allows you to filter activities by product line, making it easier to see the most relevant items. As new events are added to the schedule, the calendar updates automatically to include sessions, dates, and times. For consistency, displayed times will automatically adjust based on your time zone.

The calendar is built using the standard iCalendar format, so you can automatically integrate the calendar data directly in Outlook and Thunderbird. Follow the instructions below to set up your integration and take advantage.

[Image: Get Proactive Events Calendar - click to visit the app]
Calendar Integration
  1. Go to the calendar link here.
  2. Follow the instructions on the page to add the calendar to your email/calendar client.

We've written a brief document detailing some of the features for the calendar. Visit Document 125716.1 to find out more.

Watch featured OTN Virtual Technology Summit Replay Sessions

OTN TechBlog - Mon, 2015-11-30 16:08

Today we are featuring a session from each OTN Virtual Technology Summit Replay Group.  See session titles and abstracts below.  Watch right away and then join the group to interact with other community members and stay up to date on when NEW content is coming!

Best Practices for Migrating On-Premises Databases to the Cloud

By Leighton Nelson, Oracle ACE
Oracle Multitenant is helping organizations reduce IT costs by simplifying database consolidation, provisioning, upgrades, and more. Now you can combine the advantages of multitenant databases with the benefits of the cloud by leveraging Database as a Service (DBaaS). In this session, you’ll learn about key best practices for moving your databases from on-premises environments to the Oracle Database Cloud and back again.

What's New for Oracle and .NET - (Part 1)
By Alex Keh, Senior Principal Product Manager, Oracle
With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore new features introduced in recent releases with code and tool demonstrations using Visual Studio 2015.

Docker for Java Developers
By Roland Huss, Principal Software Engineer at Red Hat
Docker, the OS-level virtualization platform, is taking the IT world by storm. In this session, we will see what features Docker has for us Java developers. It is now possible to create truly isolated, self-contained and robust integration tests in which external dependencies are realized as Docker containers. Docker also changes the way we ship applications in that we are not only deploying application artifacts like WARs or EARs but also their execution contexts. Besides elaborating on these concepts and more, this presentation will focus on how Docker can best be integrated into the Java build process by introducing a dedicated Docker Maven plugin, which is shown in a live demo.

Debugging Weblogic Authentication
By Maarten Smeets, Senior Oracle SOA / ADF Developer, AMIS
Enterprises often centrally manage login information and group memberships (identity). Many systems use this information to achieve Single Sign On (SSO) functionality, for example. Surprisingly, access to the Weblogic Server Console is often not centrally managed. This video explains why centralizing management of these identities not only increases security, but can also reduce operational cost and even increase developer productivity. The video demonstrates several methods for debugging authentication using an external LDAP server in order to lower the bar to apply this pattern. This technically-oriented presentation will be especially useful for people working in operations who are responsible for managing Weblogic Servers.

Designing a Multi-Layered Security Strategy
By Glenn Brunette, Cybersecurity, Oracle Public Sector, Oracle
Security is a concern of every IT manager and it is clear that perimeter defense, trying to keep hackers out of your network, is not enough. At some point someone with bad intentions will penetrate your network and to prevent significant damage it is necessary to make sure there are multiple layers of defense. Hear about Oracle’s defense in depth for data centers including some new and unique security features built into the new SPARC M7 processor.

Licensing Cloud Control

Laurent Schneider - Mon, 2015-11-30 12:08

I just read the Enterprise Manager Licensing Information User Manual today. There are a lot of packs there, and you may not even know that autodiscovering targets is part of the lifecycle management pack or that blackouts are part of the diagnostic pack.

Have a look

RAM is the new disk – and how to measure its performance – Part 3 – CPU Instructions & Cycles

Tanel Poder - Mon, 2015-11-30 00:45

If you haven’t read the previous parts of this series yet, here are the links: [ Part 1 | Part 2 ].

A Refresher

In the first part of this series I said that RAM access is the slow component of a modern in-memory database engine and for performance you’d want to reduce RAM access as much as possible. Reduced memory traffic thanks to the new columnar data formats is the most important enabler for the awesome In-Memory processing performance and SIMD is just icing on the cake.

In the second part I also showed how to measure the CPU efficiency of your (Oracle) process using the Linux perf stat command. How well your applications actually utilize your CPU execution units depends on many factors. The biggest factor is your process's cache efficiency, which depends on the CPU cache size and your application's memory access patterns. Regardless of what OS CPU accounting tools like top or vmstat may show you, your "100% busy" CPUs may actually spend a significant amount of their cycles internally idle, with a stalled pipeline, waiting for some event (like a memory line arrival from RAM) to happen.

Luckily there are plenty of tools for measuring what’s actually going on inside the CPUs, thanks to modern processors having CPU Performance Counters (CPC) built in to them.

A key derived metric for understanding CPU efficiency is IPC (instructions per cycle). Years ago people actually talked about the inverse metric, CPI (cycles per instruction), as on average it took more than one CPU cycle to complete an instruction's execution (again, due to the abovementioned reasons like memory stalls). However, thanks to today's superscalar processors with out-of-order execution on a modern CPU's multiple execution units – and with large CPU caches – a well-optimized application can execute multiple instructions in a single CPU cycle, thus it's more natural to use the IPC (instructions-per-cycle) metric. With IPC, higher is better.

Here’s a trimmed snippet from the previous article, a process that was doing a fully cached full table scan of an Oracle table (stored in plain old row-oriented format):

Performance counter stats for process id '34783':

      27373.819908 task-clock                #    0.912 CPUs utilized
    86,428,653,040 cycles                    #    3.157 GHz                     [33.33%]
    32,115,412,877 instructions              #    0.37  insns per cycle
                                             #    2.39  stalled cycles per insn [40.00%]
    76,697,049,420 stalled-cycles-frontend   #   88.74% frontend cycles idle    [40.00%]
    58,627,393,395 stalled-cycles-backend    #   67.83% backend  cycles idle    [40.00%]
       256,440,384 cache-references          #    9.368 M/sec                   [26.67%]
       222,036,981 cache-misses              #   86.584 % of all cache refs     [26.66%]

      30.000601214 seconds time elapsed

The IPC of the above task is pretty bad – the CPU managed to complete only 0.37 instructions per CPU cycle. On average every instruction execution was stalled in the execution pipeline for 2.39 CPU cycles.
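Both of those derived values come straight from the raw counters above:

\[
\mathrm{IPC} = \frac{\text{instructions}}{\text{cycles}} = \frac{32{,}115{,}412{,}877}{86{,}428{,}653{,}040} \approx 0.37
\]
\[
\frac{\text{stalled-cycles-frontend}}{\text{instructions}} = \frac{76{,}697{,}049{,}420}{32{,}115{,}412{,}877} \approx 2.39 \ \text{stalled cycles per instruction}
\]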

Note: Various additional metrics can be used for drilling down into why the CPUs spent so much time stalling (like cache misses & RAM access). I covered the typical perf stat metrics in the part 2 of this series so won’t go in more detail here.

Test Scenarios

The goal of my experiments was to measure the CPU efficiency of different data scanning approaches in Oracle – on different data storage formats. I focused only on data scanning and filtering, not joins or aggregations. I ensured that everything would be cached in Oracle's buffer cache or in the In-Memory column store for all test runs – so disk IO was not a factor here (again, read more about my test environment setup in part 2 of this series).
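For reference, getting a table fully populated into the In-Memory column store can be done along these lines (a sketch only; the MEMCOMPRESS level and priority here are illustrative choices, not necessarily the settings used in these tests - see part 2 for the actual environment):

-- Sketch: request in-memory population for the test table
ALTER TABLE customers_nopart INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY CRITICAL;

-- Population is triggered by a scan (or DBMS_INMEMORY.POPULATE); wait until
-- BYTES_NOT_POPULATED reaches 0 before running any timed experiments
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments;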

The queries I ran were mostly variations of this:

SELECT COUNT(cust_valid) FROM customers_nopart c WHERE cust_id > 0

Although I was after testing the full table scanning speeds, I also added two examples of scanning through the entire table’s rows via index range scans. This allows me to show how inefficient index range scans can be when accessing a large part of a table’s rows even when all is cached in memory. Even though you see different WHERE clauses in some of the tests, they all are designed so that they go through all rows of the table (just using different access patterns and code paths).

The descriptions of test runs should be self-explanatory:

1. INDEX RANGE SCAN BAD CLUSTERING FACTOR

SELECT /*+ MONITOR INDEX(c(cust_postal_code)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_postal_code > '0';

2. INDEX RANGE SCAN GOOD CLUSTERING FACTOR

SELECT /*+ MONITOR INDEX(c(cust_id)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_id > 0;

3. FULL TABLE SCAN BUFFER CACHE (NO INMEMORY)

SELECT /*+ MONITOR FULL(c) NO_INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c WHERE cust_id > 0;

4. FULL TABLE SCAN IN MEMORY WITH WHERE cust_id > 0

SELECT /*+ MONITOR FULL(c) INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c WHERE cust_id > 0;

5. FULL TABLE SCAN IN MEMORY WITHOUT WHERE CLAUSE

SELECT /*+ MONITOR FULL(c) INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c;

6. FULL TABLE SCAN VIA BUFFER CACHE OF HCC QUERY LOW COLUMNAR-COMPRESSED TABLE

SELECT /*+ MONITOR */ COUNT(cust_valid) 
FROM customers_nopart_hcc_ql WHERE cust_id > 0

Note how all experiments except the last one are scanning the same physical table just with different options (like index scan or in-memory access path) enabled. The last experiment is against a copy of the same table (same columns, same rows), but just physically formatted in the HCC format (and fully cached in buffer cache).
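For illustration, such a copy could be created along these lines (a sketch assuming HCC-capable storage such as Exadata; not necessarily the exact DDL used here):

-- Sketch: build a Hybrid Columnar Compression (QUERY LOW) copy of the test table
CREATE TABLE customers_nopart_hcc_ql
  COMPRESS FOR QUERY LOW
AS SELECT * FROM customers_nopart;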

Test Results: Raw Numbers

It is not enough to just look at the CPU performance counters of different experiments; they are too low-level. For the full picture, we also want to know how much work (like logical IOs etc.) the application was doing and how many rows were eventually processed in each case. I also verified that I got the exact desired execution plans and access paths, and that no physical IOs or other wait events happened, using the usual Oracle metrics (see the log below).

Here’s the experiment log file with full performance numbers from SQL Monitoring reports, Snapper and perf stat:

I also put all these numbers (plus some derived values) into a spreadsheet. I’ve pasted a screenshot of the data below for convenience, but you can access the entire spreadsheet with its raw data and charts here (note that the spreadsheet has multiple tabs and configurable pivot charts in it):

Raw perf stat data from the experiments:

[Screenshot: raw perf stat data from the experiments]

Now let’s plot some charts!

Test Results: CPU Instructions

Let's start from something simple and gradually work our way deeper. I will start by listing the task-clock-ms metric that shows the CPU time usage of the Oracle process in milliseconds for each of my test table scans. This metric comes from the OS level and not from within the CPU:

[Chart: CPU time used for scanning the dataset (in milliseconds)]

As I mentioned earlier, I added two index (full) range scan based approaches for comparison. Looks like the index-based "full table scans" seen in the first and second columns use the most CPU time as the OS sees it (~120 and close to 40 seconds of CPU, respectively).

Now let’s see how many CPU instructions (how much work “requested” from CPU) the Oracle process executed for scanning the same dataset using different access paths and storage formats:

[Chart: CPU instructions executed for scanning the dataset]

Wow, the index-based approaches seem to be issuing multiple times more CPU instructions per query execution than any of the full table scans. Whatever loops the Oracle process is executing for processing the index-based query, it runs more of them. Or whatever functions it calls within those loops, the functions are “fatter”. Or both.

Let’s look into an Oracle-level metric session logical reads to see how many buffer gets it is doing:

[Chart: buffer gets done for a table scan]

 

Wow, using the index with the bad clustering factor (1st bar) causes Oracle to do over 60M logical IOs, while the table scans do around 1.6M logical IOs each. Retrieving all rows of a table via an index range scan is super-inefficient, given that the underlying table size is only 1,613,824 blocks.

This inefficiency is due to index range scans having to re-visit the same datablocks multiple times (up to one visit per row, depending on the clustering factor of the index used). This would cause another logical IO and use more CPU cycles for each buffer re-visit, except in cases where Oracle has managed to keep a buffer pinned since last visit. The index range scan with a good clustering factor needs to do much fewer logical IOs as given the more “local” clustered table access pattern, the re-visited buffers are much more likely found already looked-up and pinned (shown as the buffer is pinned count metric in V$SESSTAT).
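You can see this for yourself from V$SESSTAT; here is a small sketch (bind the test session's SID) comparing the logical read count with the pinned-buffer statistic:

-- Sketch: compare logical IOs with pinned-buffer revisits for one session
SELECT sn.name, st.value
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  st.sid = :sid
AND    sn.name IN ('session logical reads', 'buffer is pinned count');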

Knowing that my test table has 69,642,625 rows in it, I can also derive an average CPU instructions per row processed metric from the total instruction amounts:
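That derivation is just:

\[
\text{instructions per row} = \frac{\text{total instructions for the test}}{69{,}642{,}625\ \text{rows}}
\]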

[Chart: average CPU instructions per row processed]

The same numbers in tabular form:

[Screenshot: the same numbers in tabular form]

Indeed there seem to be radical code path differences (that come from underlying data and cache structure differences) that make an index-based lookup use thousands of instructions per row processed, while an in-memory scan with a single predicate used only 102 instructions per row processed on average. The in-memory counting without any predicates didn’t need to execute any data comparison logic in it, so could do its data access and counting with only 43 instructions per row on average.

So far I’ve shown you some basic stuff. As this article is about studying the full table scan efficiency, I will omit the index-access metrics from further charts. The raw metrics are all available in the raw text file and spreadsheet mentioned above.

Here are again the buffer gets of only the four different full table scan test cases:

[Chart: buffer gets done for full table scans]

All test cases except the HCC-compressed table scan cause the same amount of buffer gets (~1.6M) as this is the original table’s size in blocks. The HCC table is only slightly smaller – didn’t get great compression with the query low setting.

Now let's check the number of CPU instructions executed by these test runs:

[Chart: CPU instructions executed for full table scans]

Wow, despite the table sizes and number of logical IOs being relatively similar, the amount of machine code the Oracle process executes is wildly different! Remember, all my query is doing is scanning and filtering the data, followed by a basic COUNT(column) operation – no additional sorting or joining is done. The in-memory access paths (columns 3 & 4) get away with executing far fewer CPU instructions than the regular buffered tables in row format and HCC format (columns 1 & 2 in the chart).

All the above shows that not all logical IOs are equal. Depending on your workload and execution plans (how many block visits, how many rows extracted per block visit) and underlying storage formats (regular row format, HCC in the buffer cache or compressed columns in the In-Memory column store), you may end up doing a different amount of CPU work per row retrieved for your query.

This was true before the In-Memory option and even more noticeable with the In-Memory option. But more about this in a future article.

Test Results: CPU Cycles

Let's go deeper. We already looked at how many buffer gets and CPU instructions the process executed for the different test cases. Now let's look at how much actual CPU time (in the form of CPU cycles) these tests consumed. I added the CPU cycles metric next to instructions for that:

[Chart: CPU instructions and cycles used for full table scans]

Hey, what? How come the regular row-oriented block format table scan (TABLE BUFCACHE) takes more than twice as many CPU cycles as instructions executed?

Also, how come all the other table access methods use noticeably fewer CPU cycles than the number of instructions they've executed?

If you paid attention to this article (and previous ones) you’ll already know why. In the 1st example (TABLE BUFCACHE) the CPU must have been “waiting” for something a lot, instructions having spent multiple cycles “idle”, stalled in the pipeline, waiting for some event or necessary condition to happen (like a memory line arriving from RAM).

For example, if you are constantly waiting for the “random” RAM lines you want to access due to inefficient memory structures for scanning (like Oracle’s row-oriented datablocks), the CPU will be bottlenecked by RAM access. The CPU’s internal execution units, other than the load-store units, would be idle most of the time. The OS top command would still show you 100% utilization of a CPU by your process, but in reality you could squeeze much more out of your CPU if it didn’t have to wait for RAM so much.

In the other 3 examples above (columns 2-4), apparently there is no serious RAM (or other pipeline-stalling) bottleneck as in all cases we are able to use the multiple execution units of modern superscalar CPUs to complete more than one instruction per CPU cycle. Of course more improvements might be possible, but more about this in a following post.

For now I’ll conclude this (lengthy) post with one more chart with the fundamental derived metric instructions per cycle (IPC):

[Chart: instructions per cycle (IPC) for full table scans]

The IPC metric is derived from the previously shown instructions and CPU cycles metrics by a simple division. Higher IPC is better as it means that your CPU execution units are more utilized, it gets more done. However, as IPC is a ratio, you should never look into the IPC value alone, always look into it together with instructions and cycles metrics. It’s better to execute 1 Million instructions with IPC of 0.5 than 1 Billion instructions with an IPC of 3 – but looking into IPC in isolation doesn’t tell you how much work was actually done. Additionally, you’d want to use your application level metrics that give you an indication of how much application work got done (I used Oracle’s buffer gets and rows processed metrics for this).
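To make that concrete with the numbers from that example:

\[
\frac{10^{6}\ \text{instructions}}{0.5\ \text{IPC}} = 2\times10^{6}\ \text{cycles}
\qquad\text{vs}\qquad
\frac{10^{9}\ \text{instructions}}{3\ \text{IPC}} \approx 3.3\times10^{8}\ \text{cycles}
\]

The run with the "worse" IPC finishes in far fewer cycles simply because it executes 1000x fewer instructions; the ratio alone tells you nothing about the total amount of work done.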

Looks like there are at least 2 more parts left in this series (advanced metrics and a summary), but let's see how it goes. Sorry for any typos, it's getting quite late and I'll fix 'em some other day :)

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Learn About Hyperion & Oracle BI... 5 Minutes at a Time

Look Smarter Than You Are - Fri, 2015-11-27 14:13
Since early 2015, we've been trying to figure out how to help educate more people around the world on Oracle BI and Oracle EPM. Back in 2006, interRel launched a webcast series that started out once every two weeks and then rapidly progressed to 2-3 times per week. We presented over 125 webcasts last year to 5,000+ people from our customers, prospective customers, Oracle employees, and our competitors.

In 2007, we launched our first book and in the last 8 years, we've released over 10 books on Essbase, Planning, Smart View, Essbase Studio, and more. (We even wrote a few books we didn't get to publish on Financial Reporting and the dearly departed Web Analysis.) In 2009, we started doing free day-long, multi-track conferences across North America and participating in OTN tours around the world. We've also been trying to speak at as many user groups and conferences as we can possibly fit in. Side note, if you haven't signed up for Kscope16 yet, it's the greatest conference ever: go to kscope16.com and register (make sure you use code IRC at registration to take $100 off each person's costs).

We've been trying to innovate our education offerings since then to make sure there were as many happy Hyperion, OBIEE, and Essbase customers around the world as possible. Since we started webcasts, books, and free training days, others have started doing them too which is awesome in that it shares the Oracle Business Analytics message with even more people.

The problem is that the time we have for learning and the way we learn have changed. We can no longer take the time to sit and read an entire book. We can't schedule an hour a week at a specific time to watch an hour-long webcast when we might only be interested in a few minutes of the content. We can't always take days out of our lives to attend conferences no matter how good they are. So in June 2015 at Kscope15, we launched the next evolution in training (epm.bi/videos):


#PlayItForward is our attempt to make it easier for people to learn by making it into a series of free videos.  Each one focuses on a single topic. Here's one I did that attempts to explain What Is Big Data? in under 12 minutes:

As you can see from the video, the goal is to teach you a specific topic with marketing kept to an absolute minimum (notice that there's not a single slide in there explaining what interRel is). We figure if we remove the marketing, people will not only be more likely to watch the videos but share them as well (competitors: please feel free to watch, learn, and share too). We wanted to get to the point and not teach multiple things in each video.

Various people from interRel have recorded videos in several different categories including What's New (new features in the new versions of various products), What Is? (introductions to various products), Tips & Tricks, deep-dive series (topics that take a few videos to cover completely), random things we think are interesting, and my personal pet project, the Essbase Technical Reference.
Essbase Technical Reference on Video

Yes, I'm trying to convert the Essbase Technical Reference into current, easy-to-use videos. This is a labor of love (there are hundreds of videos to be made on just Essbase calc functions alone) and I needed to start somewhere. For the most part, I'm focusing on Essbase Calc Script functions and commands first, because that's where I get the most questions (and where some of the examples in the TechRef are especially horrendous). I've done a few Essbase.CFG settings that are relevant to calculations and a few others I just find interesting. I'm not the only one at interRel doing them, because if we waited for me to finish, well, we'd never finish. The good news is that there are lots of people at interRel who learned things and want to pass them on.

I started by doing the big ones (like CALC DIM and AGG) but then decided to tackle a specific function category: the @IS... boolean functions. I have one more of those to go and then I'm not sure what I'm tackling next. For the full ever-increasing list, go to http://bit.ly/EssTechRef, but here's the list as of this posting: 
To see all the videos we have at the moment, go to epm.bi/videos. I'm looking for advice on which TechRef videos I should record next. I'm trying to do a lot more calculation functions and Essbase.CFG settings before I move on to things like MDX functions and MaxL commands, but others may take up that mantle. If you have functions you'd like to see a video on, head over to epm.bi/videos, click on the discussion tab, and make a suggestion or two. If you like the videos and find them helpful (or you have suggestions on how to make them more helpful), please feel free to comment too.

I think I'm going to go start working on my video on FIXPARALLEL.
Categories: BI & Warehousing

What's new in Forms 12c, part 2

Gerd Volberg - Thu, 2015-11-26 02:19
Let's now look into the Form Builder and what changes we got there.

First we see the facelift in the Open-Dialog. It's now the typical Windows-Dialog, where you have much more flexibility.


New features in the Convert-Dialog:

  • New Feature "Converting to and from XML"
  • Checkbox "Overwrite"
  • Checkbox "Keep window open", if you have to convert many files at once.


Preferences-Dialog, Tab "General":

  • The checkbox "Hide PL/SQL Compile Dialog" is new
  • Web Browser location (Forms 11g: Tab "Runtime")
  • Compiler Output is new

Tab "Subclass": No changes


Tab "Wizards": No changes


 Tab "Runtime":
  • The checkbox "Show URL Parameters" is new
  • Application Server URL is much bigger!
  • The Web Browser location vanished to Tab "General"



Have fun
Gerd

UX Empathy and the Art of Storytelling

Usable Apps - Wed, 2015-11-25 13:14

At this year’s Web Summit in Dublin, Ireland, I had the opportunity to observe thousands of attendees. They came from 135 different countries and represented different generations.

Despite these enormous differences, they came together and communicated.

But how? With all of the hype about how different communication styles are among the Baby Boomers, Gen Xers, Millennials, and Generation Zers, I expected to see lots of small groupings of attendees based on generation. And I thought that session audiences would mimic this, too. But I could not have been more wrong.

How exactly, then, did speakers, panelists, and interviewers keep the attention of attendees in the 50+ crowd, the 40+ crowd, and the 20+ crowd while they sat in the same room?

The answer is far simpler than I could have imagined: Authenticity. They kept their messages simple, specific, honest, and in context of the audience and the medium in which they were delivering them.


Web Summit: Estée Lalonde (@EsteeLalonde) in conversation at the Fashion Summit session "Height, shoe size and Instagram followers please?"

Simplicity in messaging was key across Web Summit sessions: Each session was limited to 20 minutes, no matter whether the stage was occupied by one speaker or a panel of interviewees. For this to be successful, those onstage needed to understand their brands as well as the audience and what they were there to hear.

Attention spans are shortening, so it's increasingly critical to deliver an honest, authentic, personally engaging story. Runar Reistrup, Depop, said it well at the Web Summit:


Web Summit: Runar Reistrup (@runarreistrup) in conversation during the Fashion Summit session "A branding lesson from the fashion industry"

While lots of research, thought, and hard work goes into designing and building products, today’s brand awareness is built with social media. Users need to understand the story you’re telling but not be overwhelmed by contrived messaging.

People want to connect with stories and learn key messages through those stories. Storytelling is the important challenge of our age. And how we use each social medium to tell a story is equally important. Storytelling across mediums is not a one-size-fits-all experience; each medium deserves a unique messaging style. As Mark Little (@marklittlenews), founder of Storyful, makes a point of saying, "This is the golden age of storytelling."

The Oracle Applications User Experience team recognizes the significance of storytelling and the importance of communicating the personality of our brand. We take time to nurture connections and relationships with those who use our applications, which enables us to empathize with our users in authentic ways.


Web Summit: Áine Kerr (@AineKerr) talking about the art of storytelling

The Oracle simplified user interface is designed with consideration of our brand and the real people—like you—who use our applications. We want you to be as comfortable using our applications as you are having a conversation in your living room. We build intuitive applications that are based on real-world stories—yours—and that solve real-world challenges, helping make your work easier.

We experiment quite a bit, and we purposefully “think as if there is no box.” (Maria Hatzistefanis, Rodial)


Web Summit: Maria Hatzistefanis (@MrsRodial) in conversation during the Fashion Summit session "Communication with your customer in the digital age"

We strive to find that authentic connection between the simplified user interface design and the user. We use context and content (words) to help shape and inform the message we promote on each user interface page. We choose our words as well as the tone carefully because we recognize the significance of messaging, whether the message is a two-word field label or a tweet. And we test, modify, and retest our designs with real users before we build applications to ensure that the designs respond to you and your needs.

If you want to take advantage of our design approach and practices, download our simplified user experience design patterns eBook for free and design a user experience that mimics the one we deliver in the simplified user interface. And if you do, please let us know what you think at @usableapps.

Oracle Priority Support Infogram for 25-NOV-2015 1000th posting!

Oracle Infogram - Wed, 2015-11-25 11:44

This marks the 1000th post to the Infogram. I am awarding myself a low-carb lollipop.

Oracle VM

Oracle VM Performance and Tuning - Part 4, from Virtually All The Time.

Fusion

Changing Appearances: Give The Apps Your Corporate Look, from Fusion Applications Developer Relations.

DRM

Patch Set Update: Oracle Data Relationship Management 11.1.2.4.321, from Business Analytics - Proactive Support.

ACM

Oracle and Adaptive Case Management: Part 1, from SOA & BPM Partner Community Blog.


Why Data Virtualization Is So Vital

Kubilay Çilkara - Tue, 2015-11-24 16:35
It probably seems like every week brings a new piece of software you absolutely have to have. As you’re about to find out, though, data virtualization is actually worth the hype.

The Old Ways of Doing Things

Traditionally, data management has been a cumbersome process, to say the least. Usually it has meant data replication, bulk data movement, or intermediary connectors and servers for point-to-point integration. Of course, in some situations it’s a combination of all three.

These methods were never really ideal, though; they were simply the only options available at the time. That’s the main reason you’re seeing them less and less: the moment something better came along, companies jumped on it.

Their diminishing utility can also be traced to three main factors:

  • High costs related to data movement
  • The astronomical growth in data (also referred to as Big Data)
  • Customers who expect real-time information

All three are fairly self-explanatory, but the last one is worth elaborating on. Customers these days don’t understand why they can’t get all the information they want exactly when they want it. How could they, when Google puts practically any data they could ever want within reach? If you try to explain that your company can’t do this, they will most likely have a hard time believing you. Worse, they may believe you but assume the problem is specific to your company and that some competitor won’t have it.

Introducing Data Virtualization

It was only a matter of time before this problem was addressed. When so many companies struggle with the same challenge, there is quite an incentive for someone to solve it.

That’s where data virtualization comes into play. Companies with critical information spread across the enterprise in all kinds of formats and locations no longer have to fight to get their hands on it. Instead, they can use a virtualization platform to search out what they need.

Flexible Searches for Better Results

It wouldn’t make much sense for this type of software not to have a certain amount of agility built in. After all, that’s its main selling point: companies invest in it precisely because it isn’t held back by issues of layout or formatting. Whatever you need, it can find.

Still, for best results, many platforms now offer a single interface that can separate and extract aggregates of data in all kinds of ways. The end result is a flexible search that can be leveraged toward all kinds of ends. It’s no longer just about finding any type of information you need, but about finding it in the most efficient and productive way possible.
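
To make that concrete, here is a minimal sketch, in Python, of what a single-interface federated query can look like. It assumes the virtualization platform exposes a standard ODBC endpoint, which is a common pattern; the DSN, schema, and table names are hypothetical placeholders rather than any particular vendor’s API.

    # Minimal sketch of a federated query through a data virtualization
    # layer. The DSN "DataVirtLayer" and the virt.* tables are invented
    # placeholders; substitute your platform's actual endpoint and schema.
    import pyodbc

    # One connection and one SQL dialect, even if CUSTOMERS lives in a
    # distributed database and ORDERS in a mainframe data store.
    conn = pyodbc.connect("DSN=DataVirtLayer")
    cursor = conn.cursor()

    cursor.execute("""
        SELECT c.customer_name, SUM(o.order_total) AS lifetime_value
        FROM   virt.customers c
        JOIN   virt.orders    o ON o.customer_id = c.customer_id
        GROUP BY c.customer_name
        ORDER BY lifetime_value DESC
    """)

    for name, value in cursor.fetchall():
        print(f"{name}: {value:,.2f}")

    conn.close()

The design point is that the consuming application never needs to know where each table physically lives; the virtualization layer resolves that behind one interface.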

Keep Your Mainframe

One misconception some companies have about data virtualization is that it requires adjustments to your mainframe before it can truly be effective. The misconception is understandable because, for many platforms, this is definitely the case; those tend to be earlier versions, though, or products that simply aren’t of the highest quality.

With really good versions, though, you can essentially transform your company’s mainframe into a virtualization platform. Such an approach isn’t just cost-effective. It also ensures you aren’t wasting resources, including time, on addressing the shortcomings of your current mainframe, something no company wants to do.

Don’t be turned off from taking a virtualization approach to your cache of data because you’re imagining a long list of chores needed to transform your mainframe. Instead, just be sure you invest in a high-end version that will actually turn your current mainframe into something much better.

A Better Approach to Your Current Mainframe

Let’s look at some further benefits of this approach. First, if the platform you choose runs on a high-performance server, you immediately eliminate redundant point-to-point integration. That improves both performance and manageability, and if you ever want to scale up, it makes doing so much easier.

Proper data migration is key to a good virtualization process. Done right, the end user won’t have to worry about corrupted data, and communication between machines will be crystal clear.

If you divert processing-intensive data mapping and transformation work away from your mainframe’s General Purpose Processor to the zIIP specialty engine, you can dramatically reduce your MIPS capacity usage and, therefore, your company’s TCO (Total Cost of Ownership).
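
To see why that matters, here is a hypothetical back-of-envelope calculation in Python. Every number in it is an assumption made up for illustration, not a vendor figure; plug in your own workload and contract terms.

    # Hypothetical back-of-envelope look at the zIIP offload argument.
    # All figures below are invented assumptions for illustration only.
    gp_mips_used = 1200          # MIPS consumed on General Purpose Processors
    offloadable_share = 0.40     # share of that work that is mapping/transformation
    cost_per_mips_year = 3500.0  # assumed $/year for one GP MIPS (software + hardware)
    ziip_cost_per_year = 120_000 # assumed flat yearly cost of the zIIP engine

    # Work moved to the zIIP no longer counts against GP MIPS capacity,
    # which is what MIPS-based software charges are measured on.
    offloaded_mips = gp_mips_used * offloadable_share
    gross_savings = offloaded_mips * cost_per_mips_year
    net_savings = gross_savings - ziip_cost_per_year

    print(f"MIPS moved off the GP processors: {offloaded_mips:.0f}")
    print(f"Gross GP cost avoided: ${gross_savings:,.0f}/year")
    print(f"Net TCO change:        ${net_savings:,.0f}/year")

Under these made-up numbers the offload pays for itself many times over; the point of the sketch is simply that the savings scale with how much mapping and transformation work you can move off the GP processors.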

Lastly, maybe you’d like to extract every last piece of value from your mainframe data. If so, good virtualization software will not only make this possible, but do so in a way that turns your non-relational mainframe data into relational formats that any business analytics or intelligence application can use.
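
As a toy illustration of that last idea, the sketch below flattens a fixed-width mainframe record, the kind a COBOL copybook might define, into a relational-style row. The field names and offsets are invented; a real virtualization platform performs this mapping for you, driven by the copybook metadata, but the underlying transformation looks much like this.

    # Toy example: map a fixed-width (copybook-style) record into a
    # relational-style row. The layout below is invented for illustration.
    RECORD_LAYOUT = [            # (column name, start offset, length)
        ("account_id",    0,  8),
        ("holder_name",   8, 20),
        ("balance_cents", 28, 10),
    ]

    def to_row(raw: str) -> dict:
        """Slice one fixed-width record into named, typed columns."""
        row = {col: raw[start:start + length].strip()
               for col, start, length in RECORD_LAYOUT}
        row["balance_cents"] = int(row["balance_cents"])  # digits -> integer
        return row

    # 38-character record: 8-char id, 20-char name, 10-digit balance.
    print(to_row("00001234JANE DOE            0000054321"))
    # {'account_id': '00001234', 'holder_name': 'JANE DOE', 'balance_cents': 54321}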

Key Features to Look for in Your Virtualization Platform

If you’re now sold on the idea of investing in a virtualization platform, the next step is getting smart about what to look for. As you can imagine, you won’t have trouble finding a program to buy, but you want to make sure it’s actually going to be worth every penny.

The first is, simply, the number of data providers supported. You want to be able to address everything from big data to machine data to syslogs, both distributed and mainframe. Obviously this depends a lot on your current needs, but think about the future too.

Then there’s the same question for data consumers: the cloud, analytics, business intelligence and, of course, the web. Making sure you can stay current for some time is very important; technology changes quickly, and the better your virtualization platform is, the longer you’ll have before needing to upgrade. Look closely at the migration process, and at whether the provider can work with your IT team to improve workflow. This will help your company get back on track more quickly and with better results.

Finally, don’t forget to look at costs, especially where scalability is concerned. If you plan on getting bigger in the future, you don’t want it to take a burdensome investment to do so.

As you can see, virtualization platforms definitely live up to the hype. You just have to be sure you spend your money on the right kind.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, on topics such as terminal emulation, enterprise mobility and more.
Categories: DBA Blogs

The Times They Are A-Changin'

Floyd Teter - Mon, 2015-11-23 19:36
Come gather 'round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You'll be drenched to the bone
If your time to you
Is worth savin'
Then you better start swimmin'
Or you'll sink like a stone
For the times they are a-changin'.

                     -- From Bob Dylan's "The Times They Are A-Changin'"


Spent some time with more really smart folks at Oracle last week.  Yeah, these people are really smart...I'm still wondering how they let me in the door.

During that time, I probably had three separate conversations with different people on how SaaS changes the consulting model.  Frankly, implementation is no longer a big money maker in the SaaS game.  The combination of reduced technology overhead, reduced customizations, and a sharp focus on customer success is killing the IT consulting goose that lays the golden eggs: application implementation.  You can see indications of it just in the average cycle times between subscription and go-live: they're down to about 4.5 months and still trending down.  Bringing SaaS customers up in less than 30 days is something Oracle can see on the near horizon.  Unfortunately, as the cycle times for SaaS implementations shorten, it gets more difficult for an implementation partner to generate significant revenues and margins.  The entire model is built around 12-to-24 month implementations, and SaaS makes those time frames a thing of the past.

So, if I were a SaaS implementation partner today, what would I do?  Frankly, I'd switch to a relationship-retainer approach with my customers (not my idea...all those smart people I mentioned suggested it).  I'd dedicate teams that would implement SaaS, extend SaaS functionality, test new upgrades prior to rollout, and maintain the customer's SaaS apps.  I'd build a relationship with those customers rather than simply attempting to sell implementation services.  The value to customers?  Your workforce focuses on the business rather than the software.  You need new reports or business intelligence?  Covered in our agreement.  Test this new release before we upgrade our production instance?  Covered in our agreement.  Some new fields on a user page or an add-on business process?  Covered in our agreement.  Something not working?  Let my team deal with Oracle Support...covered in our agreement.

Other ideas?  The comments await.

The times they are a-changin'...quickly.  Better start swimmin'.

