Feed aggregator

Oracle Cloud – Oracle Management Cloud

Marco Gralike - Thu, 2016-03-10 09:09
Oracle Management Cloud is a new Oracle Cloud offering launched during Oracle Open World 2015,…

Getting Started with MapR Streams

Tugdual Grall - Thu, 2016-03-10 04:18
Read this article on my new blog. You can find a new tutorial that explains how to deploy an Apache Kafka application to MapR Streams; the tutorial is available here: Getting Started with MapR Streams. MapR Streams is a new distributed messaging system for streaming event data at scale, and it’s integrated into the MapR converged platform. MapR Streams uses the Apache Kafka API, so…

Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG

Rittman Mead Consulting - Thu, 2016-03-10 04:00


A few months before the start of the 2014 World Cup, Jon Mead, Rittman Mead’s CEO, asked me to come up with a way to showcase our strengths and skills while leveraging the excitement generated by the World Cup. With this in mind, my colleague Pete Tamisin and I decided to create our own game-tracking page for World Cup matches, similar to the ones you see on popular sports websites like ESPN and CBSSports, with one caveat: we would build the game-tracker inside an OBIEE dashboard.

Unfortunately, after several long nights and weekends, we weren’t able to come up with something we were satisfied with, but we learned tons along the way and kept a lot of the content we created for future use. That future use came several months later when we decided to create our own soccer match (“The Rittman Mead Cup”) and build a game-tracking dashboard that would support this match. We then had the pleasure to present our work in a few industry conferences, like the BI Forum in Atlanta and KScope in Hollywood, Florida.

GaOUG Tech Day

Recently I had the privilege of delivering that presentation one last time, at Georgia Oracle Users Group’s Tech Day 2016. With the right amount of silliness (yes, The Rittman Mead cup was played/acted by our own employees), this presentation allowed us to discuss with the audience our approach to designing a “sticky” application; meaning, an application that users and consumers will not only find useful, but also enjoyable, increasing the chances they will return to and use the application.

We live in an era where nice, fun, pretty applications are commonplace, and our audience expects the same from their business applications. Validating the numbers on the dashboard is no longer enough. We need to be able to present that data in an attractive, intuitive, and captivating way. So, throughout the presentation, I discussed with the audience the thoughtful approach we used when designing our game-tracking page. We focused mainly on the following topics: Serving Our Consumers; Making Life Easier for Our Designers, Modelers, and Analysts; and Promoting Process and Collaboration (the latter can be accomplished with our ChitChat application). Our job would have been a lot easier if ChitChat were available when we first put this presentation together….

Finally, you can find the slides for the presentation here. Please add your comments and questions below. There are usually multiple ways of accomplishing the same thing, so I’d be grateful to hear how you guys are creating “stickiness” with your users in your organizations.

Until the next time.

The post Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Slides and demo script from my ORDS talk at apex.world 2016 in Rotterdam

Dietmar Aust - Thu, 2016-03-10 01:23
Hi everybody,

I just came back from the apex.world conference in Rotterdam ... very nice location, great content and the wonderful APEX community to hang out with ... always a pleasure.

As promised, you can download the slides and the demo script (as is) from my site.

Instructions are included.

See you at APEX Connect in Berlin or KScope in Chicago, #letswreckthistogether.

Cheers and enjoy!

Application Testing: The Oracle Utilities Difference

Anthony Shorten - Wed, 2016-03-09 19:33

Late last year we introduced a new product to the Oracle Utilities product set. It was the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities. This pack is a set of prebuilt content and utilities based upon Oracle Application Testing Suite.

One of the major challenges in any implementation, or upgrade, is the amount of time that testing takes in relation to the overall time to go live. Typically testing is on the critical path for most implementations and upgrades. Subsequently, customers have asked us to help address this for our products.

Typically, one technique to reduce testing time is to implement automated testing as much as possible. The feedback we got from most implementations was that the initial cost of adopting automated testing tools was quite high, as you needed to build and maintain the assets for the automated testing to be cost effective. This typically requires specialist skills in the testing tool.

This also brought up another issue with traditional automated testing techniques. Most traditional automated testing tools use the user interface to record their automation scripts. Let me explain. Typically, the tool will "record" your interactions with the online system, including the data you used. This is then built into a testing "script" that reproduces the interactions to automate them. This is limiting in that, to use the same script with another set of data for alternative scenarios, you have to get a script developer involved, and this requires additional skills. This is akin to programming.

Now let me explain the difference with Oracle Application Testing Suite in combination with the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities:

  • Prebuilt Testing Assets - We provide a set of prebuilt component based assets that the product developers use to QA the product. These greatly reduce the need for building assets from scratch and get you testing earlier.
  • One pack, multiple products, multiple versions - The pack contains the components for the Oracle Utilities products supported and the versions supported.
  • Service based not UI based - The components in the pack are service based rather than using the traditional UI approach. This isolates your functionality from any user experience changes. In a traditional approach, any change to the user interface would require either re-recording the script or making programming changes to the script. This is not needed with the service based approach.
  • Supports Online, Web Services and Batch - Traditional approaches typically cover online testing only. Oracle Application Testing Suite and the pack allow for online, web services and batch testing as well, which greatly expands the benefits.
  • Component Generator Utility - Whilst the pack supplies the components you will need, we are aware that some implementations are heavily customized, so we provide a Component Generator which uses the product metadata to generate a custom component that can be added to the existing library.
  • Assemble not code - We use the Oracle Flow Builder product, used by many Oracle eBusiness Suite customers, to assemble the components into a flow that models your business processes. Oracle Flow Builder simply generates the script that is executed, without the need for technical script development.
  • Upgrade easier - The upgrade process is much simpler with the flows simply pointed to the new version of the components supplied to perform your upgrade testing.
  • Can Co-exist with UI based Components - Whilst our solution is primarily service based, it is possible to use all the facilities in Oracle Application Testing Suite to build components, including traditional recording, to add any logic introduced on the browser client. The base product does not introduce business logic into the user interface so the base components are not user interface based. We do supply a number of UI based components in the Oracle Utilities Application Framework part of the pack to illustrate that UI based components can co-exist.
  • Cross product testing - It is possible to test across Oracle Utilities products within a single flow. As the license includes the relevant Oracle Application Testing Suite tools (Flow Builder, OpenScript etc) it is possible to add components for bespoke and other solutions, that are web or service based, in your implementation as well.
  • Flexible licensing - The licensing of the testing solution is very flexible. You not only get the pack and the Oracle Application Testing Suite but the license allows the following:
    • The license is regardless of the number of Oracle Utilities products you use. Obviously, customers with more than one Oracle Utilities product will see a greater benefit, but it is cost effective regardless.
    • The license is regardless of the number of copies of the products you run the testing against. There is a server enablement that needs to be performed as part of the installation, but you are not restricted in the number of non-production copies you run the solution against.
    • The license conditions include full use of the Oracle Application Testing Suite for licensed users. This can be used against any web or Web Service based application on the site so that you can include third party integration as part of your flows if necessary.
    • The license conditions include OpenScript which allows technical people to build and maintain their own custom assets to add to the component libraries to perform a wide range of ancillary testing.
  • Data is separated from process - In the traditional approach you included the data as part of the test. Using this solution, the flow is built independent of the data. The data, in the form of databanks (CSV, MS Excel etc) can be attached at the completion of the flow, in the flow definition or altered AFTER the flow has been built. Even after the script has been built, Oracle Flow Builder separates the data from the flow so that you can substitute the data without the need to regenerate the script. This means you have greater reuse and greater flexibility in your testing.
  • Flexible execution of Testing - The Flow Builder product generates a script (that typically needs no alteration after generation). This script can be executed in OpenScript (for developers), using the optional Oracle Test Manager product, loaded into the optional Oracle Load Testing product for performance/load testing or executed by a third party tool via a command line interface. This flexibility means greater reuse of your testing assets. 
Support for Extensions

One of the most common questions I get about the pack is about support for customization (or extensions, as we call them). Let me step back before answering and put extensions into categories.

When I discuss extending our product there is a full range of facilities available. To focus on the impact of extensions I am going to categorize these into three simple categories:

  • User Interface extensions - These are bits of code in CSS or JavaScript that extend the user interface directly or add business logic into the browser front end. These are NOT covered by the base components, as the product has all the business logic in the services layer. The reason for this is that the same business rules can be reused regardless of the channel used (such as online, web services and batch); if you have them in just one channel, then you miss those business rules elsewhere. To support these you can use the features of Oracle Application Testing Suite to record that logic and generate a component for you. You can then include that component in any flow, with other relevant components, to test that logic.
  • Tier 1 extensions - These are extensions that alter the structure of the underlying object; anything that changes the API to the object is what I am talking about. Extension types such as custom schemas alter the structure of the object (e.g. flattening data, changing tags, adding rules in the schema etc). These will require the use of the Component Generator, as the API will be different from the base component.
  • Tier 2 extensions - These are extensions within the objects themselves that alter behavior. For example, algorithms, user exits and change handlers are examples of such extensions. These are supported by the base components directly, as they alter the base data, not the structure. If you have a combination of Tier 1 and Tier 2 then you must use the Component Generator, as the structure is altered.

Customers will use a combination of all three, and in some cases will need to use the component generators (the UI one or the metadata one), but generally the components supplied will be reused for at least part of the testing, which saves time.

We are excited about this new product and we look forward to adding more technology and new features over the next few releases.

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - Wed, 2016-03-09 13:05

Make a Sad Face..:-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post last year, and until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2016-03-09 13:05

I wrote a paper about Oracle Database Vault in 12c for SANS last year and this was published in January 2015 by SANS on their website. I also prepared and did a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Oracle Cloud – Moving a dumpfile into the Database as a Service Cloud

Marco Gralike - Wed, 2016-03-09 08:26
Now that I got myself a bit acquainted with the Database as a Service offering,…

I’m Sasank Vemana and this is how I work

Duncan Davies - Wed, 2016-03-09 08:00

The next profile in our ‘How I Work‘ series is Sasank Vemana. Sasank burst onto the PeopleSoft blogging scene in 2014 with his Sasank’s PeopleSoft Log site, and has been adding entries at a ferocious pace since. He is probably best known for his series of posts on altering the PeopleSoft branding to make it match a corporate palette, as well as configuration and code changes related to UI/UX.

I met Sasank at OOW15 and he’s a lovely chap. He has given some great responses to the questions. I’d love to know how he persuaded his employer to give him 4 monitors and about his use of dual mice!

Name: Sasank Vemana

Occupation: PeopleSoft/Enterprise Technology
Location: Tallahassee, Florida, USA
Current computer:
Desktop: Dell Optiplex 9020 (Windows 7, Intel Core i7, 8 GB RAM)
Laptop: Dell LATITUDE | E6530 (Windows 7, Intel Core i7, 8 GB RAM)
Current mobile devices: Samsung Galaxy S4. Yes – That reminds me I need an upgrade!
I work: To solve problems.

What apps/software/tools can’t you live without?
Google is my friend and my portal to everything. I try not to overload myself with information which I know I can find; Google search helps me find what I am looking for. On a side note, I use WhatsApp and Facebook to keep in touch with my family and friends, who are scattered in different parts of the world. I also use the S Health app to keep track of my physical activities and monitor my health.

Besides your phone and computer, what gadget can’t you live without?
Not a big gadget fan! I can live without them as long as I have a good internet connection, which seems to be the most important thing for me these days. With that, I can do my reading, research and also remote to any of my computers (if needed) regardless of the device. Same goes with entertainment – Netflix, Spotify, etc.

What’s your workspace like?
Over the past year and a half, I have been using a standing desk at work, thanks to my current employers who were kind enough to allow me to rearrange my workspace. When I am at work and not in meetings, I try to stand as much as possible and use a bar stool when I tend to get tired. Occasionally, I also just sit down with my laptop wherever I find space. The four monitor desktop setup helps tremendously when I have multiple applications running. I also have two mice and try to switch between my left and right hand. I am ambidextrous so it works for me (I will not recommend this otherwise!).

Sasank Vemana - How I Work - Picture 1

Standing desk, 4 monitors and dual mice

Sasank Vemana - How I Work - Picture 2

What do you listen to while you work?
Usually, I am zoned into whatever I am doing and mostly oblivious to events around me. I don’t listen to music while I am at work these days. At times, I listen to live cricket or tennis commentary if anything I care about is going on. A set of Bose noise canceling headphones has long been on my wish list (in case Santa is reading!).

What PeopleSoft-related productivity apps do you use?
Oracle Virtual Box/PUM Images – My savior for evaluation, experimentation and proof of concept purposes.
Web Services: SoapUI, Postman (Chrome Add-On)
Web Development: Browser based Developer Tools (Chrome/Firefox/IE), DOM/StyleSheets/JavaScripts Explorers, Device Emulators, etc., Fiddler, Live HTTP Headers (Firefox Add-On)
Text Editors/Journals: Notepad++, Programmer’s File Editor (PFE), WinMerge, Evernote
DB Tools: Golden (for the most part since it is light weight and does not hog resources), SQL Developer (for some activities), OEM – Oracle Enterprise Manager
Screen Capture/Recording: SnagIt and Jing (short videos) are great for communication

Do you have a 2-line tip that some others might not know?
Tracing tip: Use PeopleCode – 2048 (Show Each), SQL – 3 (Statement, Bind). This gives us every line of code and SQL that executed in sequence without all the other clutter which is not always useful especially when we are just trying to understand the logic.

What SQL/Code do you find yourself writing most often?
Generally speaking, queries on PeopleTools metadata tables. E.g.: PSAUTHITEM (security related queries), PSPRSMDEFN (portal navigation queries), etc.

What would be the one item you’d add to PeopleSoft if you could?
I would add/implement a log aggregation and mining utility. I have spent many hours combing through log files distributed across different servers. It would be great to see something that aggregates all server logs and provides mining capabilities (regex and/or free-form search). After attending Oracle OpenWorld 2015, I understand that PeopleTools 8.55 has some new features – as part of Health Center – that might assist with logs. I look forward to evaluating this functionality!

What everyday thing are you better at than anyone else?
Probably exploring! Although, I would be careful not to say that I am better at it than others. I just find myself doing that a lot without worrying about getting lost. It might seem like a wasteful effort at times but it is a natural way of learning for me.

What’s the best advice you’ve ever received?
These are not really advice received from someone but some of my favorite quotes that I can think of right now:
– Learn to profit from your losses.
– Don’t make decisions during a storm.
– A manager gets work done through people whereas a leader inspires people to meet shared goals.
– And miles to go before I sleep.

Webcast Q&A: Marketing Asset Management Integrated with Marketing Cloud

WebCenter Team - Wed, 2016-03-09 07:10

WEBCAST Marketing Asset Management Integrated with Marketing Cloud

Thank you to everyone who joined us last Wednesday on our live webcast: Marketing Asset Management Integrated with Marketing Cloud; we appreciate your interest and the great Q&A that followed! For those who missed it or who would like to watch it again, the on-demand replay of the webcast is now available here.

Mariam Tariq

On the webcast, Mariam Tariq, Senior Director of Product Management -- Content and Process at Oracle discussed how organizations are struggling with managing marketing assets across multiple digital channels where content on each channel (web, email, Facebook page, etc.) is created and delivered by different teams of marketers using different technologies. Mariam gave specific examples and a great demonstration to show audience members how you can enable IT to empower Line of Business by putting the power to create rich microsites in their hands -- driving business agility and innovation.

We also wanted to capture some of the most asked questions and Mariam’s responses here. But please do follow up with us and comment here if you have additional questions or feedback. We always look forward to hearing from you.

Q: Rather than pushing assets directly into another system, or even using the microsite portal to share them, can our marketing team simply share links directly from Document Cloud and not use Process or Sites Cloud?

Mariam: Absolutely. If you need only the collaboration platform, you can use Documents on its own. Keep in mind that Documents Cloud includes limited use of Sites Cloud for test use cases. So you can try out Sites Cloud with Documents. 

Q: Since you didn’t show Oracle SRM in the demo, can you explain how that integration works?

Mariam: There is a release coming this spring with Documents integration into Oracle SRM. Oracle SRM, to quickly summarize, enables social marketers to create and manage social feeds. This includes a layout editor to create Facebook pages and the ability to schedule and publish updates like Twitter posts. In SRM, you will see a button allowing you to directly access Documents Cloud content like an image. The file will get copied into SRM and you can then use that file in your social messages.

Q: Can the approvers get email about tasks?

Mariam: Most definitely. Approvers get an email about the task with a link to take them directly into Process Cloud to review the files and do the approval.

Q: How does pricing work?

Mariam: Documents and Process use user-based pricing. Sites Cloud is priced on a metric called 'interactions', which is a measure of data consumption; so it is essentially priced by the amount of data delivered across all your microsites. More detail is available at cloud.oracle.com.

Q: With the website tool you showed, can we restrict who has access to the site?

Mariam: Yes. Absolutely. You can secure access to the site. 

Q: Are the conversation features you showed in Document Cloud related to the Oracle Social Network offering… it looks similar?

Mariam: Yes. We have officially rolled Oracle Social Network (OSN) into Documents Cloud. Since the social collaboration process often involves sharing and discussing documents like Word and PowerPoint files, the feedback we received from our customers was to simply merge the two rather than having a separate service for each. So now, the OSN features are part of Documents. You can directly access the comments and discussions in context of the organized files and folders of Documents Cloud rather than referencing documents in a separate interface.

Q: Where are the assets stored? Isn't WebCenter Sites from Oracle a content repository?

Mariam: WebCenter Sites is definitely used as a content management system. It's on-premise. Documents Cloud is a multi-tenant cloud solution. You can use them both together. What Documents Cloud provides you is a flexible cloud-based collaboration platform that you can also use with external agencies. Those assets can be consumed within WebCenter Sites. This is a 'hybrid' cloud to on-premise setup. Now in WebCenter Sites you can reference the cloud assets directly from Documents. A subset of your assets could be managed this way. It simply provides a more nimble cloud collaboration extension to complement WebCenter Sites (or any WCM system).

Q: Is it "all push" to customers/clients or is there a feedback loop from customers/clients to track the success of the campaign?

Mariam: In the demo, we're simply pushing the content out cross-channel. We have analytics coming later in this year that will show consumption of the content that will help with the feedback loop to measure engagement.

In case you missed the webcast and would like to listen to the on demand version, you can do so here.

UKOUG Application Server & Middleware SIG

Tim Hall - Wed, 2016-03-09 05:50

I’ll be speaking at the UKOUG Application Server & Middleware SIG tomorrow.

It’s going to be another hit-and-run affair for me. I’m in meetings at work all morning, then I’ll be doing a mad dash to get to my presentation at the SIG, then straight back to work to do an upgrade during the evening.

The agenda looks cool, so I would have liked to stay the whole day, but sadly that’s not going to happen. :(

My favourite bit of any tech event is interacting with people, so just turning up to present is not ideal, but in this case I don’t have a choice in the matter, unless I go AWOL from work… :)

Hope to see you there, even if it is only briefly!



UKOUG Application Server & Middleware SIG was first posted on March 9, 2016 at 12:50 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Free Oracle Database Monitoring Webinar by Oracle ACE Ronald Rood

Gerger Consulting - Wed, 2016-03-09 05:19
Attend our webinar and learn how you can monitor your Oracle Database and cloud infrastructure with Zabbix, the open source monitoring tool.

The presentation is hosted by Oracle ACE and Certified Master Ronald Rood.

Learn more about the webinar at this link.

Categories: Development

OSB 12c Adapter for Oracle Utilities

Anthony Shorten - Tue, 2016-03-08 23:32

In Oracle Utilities Application Framework V4. we introduced Oracle Service Bus adapters to allow that product to process Outbound Messages and, for Oracle Utilities Customer Care And Billing, Notification and Workflow records.

These adapters were compatible with Oracle Service Bus 11g. We have now patched these adapters to be compatible with the new facilities in Oracle Service Bus 12c. The following patches must be applied:

  • Patch 22308653
  • Patch 21760629
  • Patch 22308684

Debugging Kibana using Chrome developer tools

Pythian Group - Tue, 2016-03-08 17:53

Amazon Elasticsearch Service is a managed service to implement Elasticsearch in AWS. Underlying instances are managed by AWS and interaction with the service is available through API and AWS GUI.

Kibana is also integrated with Amazon Elasticsearch Service. We came across an issue which caused Kibana4 to show the following error message, when searching for *.

Courier Fetch: 10 of 60 shards failed.

The error is not very descriptive.

As Amazon Elasticsearch Service is an endpoint only, we do not have direct access to the instances; we only have access to a few API tools.

We decided to see what could be found from the Chrome browser.

The Chrome Developer Tools (DevTools) contains lots of useful debugging possibilities.

DevTools can be started using several methods.

1. Right click and click Inspect.
2. From Menu -> More Tools -> Developer Tools
3. Press F12

The Network tab under DevTools can be used to debug a wide variety of issues. It records every request made while a web page is loading, and captures a wide range of information about each request, like the HTTP access method, status, and the time taken to complete the request.

By clicking on any of the requested resources, we are able to get more information on the request.

In this case, the interesting bit was under the Preview tab. The Preview tab captures the data Chrome got back from the search and stores it as objects.

A successful query would look like the image below, captured from a Kibana3 instance on the public website logstash.openstack.org.


We checked "_msearch?timeout=3000.." and received the following error messages under the nested values (for example, "responses" -> "0" -> "_shards" -> "failures" -> "0"):

{index: "logstash-2016.02.24", shard: 1, status: 500, reason: "RemoteTransportException[[Leech][inet[/]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]];"}

So the issue is clear: fielddata usage is above the limit.
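The limit itself is buried in the exception text. When eyeballing many of these messages, a small helper can pull the byte limit out of the reason string (this is purely illustrative; the function name and regular expression are my own, not part of any Elasticsearch API):

```python
import re

def breaker_limit_bytes(reason):
    """Extract the circuit breaker byte limit from a
    CircuitBreakingException message, or None if absent."""
    match = re.search(r'larger than limit of \[(\d+)/', reason)
    return int(match.group(1)) if match else None

reason = ('CircuitBreakingException[[FIELDDATA] Data too large, '
          'data for [@timestamp] would be larger than limit of '
          '[5143501209/4.7gb]]')
print(breaker_limit_bytes(reason))  # 5143501209
```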

As per Amazon documentation,

Field Data Breaker –
Percentage of JVM heap memory allowed to load a single data field into memory. The default value is 60%. We recommend raising this limit if you are uploading data with large fields.
For more information, see Field data in the Elasticsearch documentation.
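The numbers in the error are consistent with that default: 5143501209 bytes (4.7 GB) is 60% of the JVM heap. A quick sanity check (the heap size below is inferred by working backwards from the error, not something AWS reports directly):

```python
def fielddata_limit(heap_bytes, percent):
    """Bytes of heap the fielddata breaker allows at a given percentage."""
    return heap_bytes * percent // 100

# A 5143501209-byte limit at the default 60% implies a heap of
# roughly 8.57 GB (8572502015 bytes).
heap_bytes = 5143501209 * 100 // 60
print(fielddata_limit(heap_bytes, 60))  # 5143501209
```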

The following URL documents the supported Amazon Elasticsearch operations.


On checking the current heap usage (second column) of the data nodes, we can see that heap usage is very high:

$ curl -XGET "http://elasticsearch.abc.com/_cat/nodes?v"
host ip heap.percent ram.percent load node.role master name
x.x.x.x   10   85   0.00   –   m   Drax the Destroyer
x.x.x.x   7   85   0.00   –   *   H.E.R.B.I.E.
x.x.x.x   78   64   1.08   d   –   Black Cat
x.x.x.x   80   62   1.41   d   – Leech
x.x.x.x   7   85   0.00   –   m   Alex
x.x.x.x   78   63   0.27   d   –   Saint Anna
x.x.x.x   80   63   0.28   d   –   Martinex
x.x.x.x   78   63   0.59   d   –   Scorpio
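Rather than scanning output like this by eye, the plain-text _cat/nodes?v format is easy to parse. The sketch below (my own helper, assuming the name is the last column as in the output above) flags data nodes with high heap usage:

```python
def high_heap_data_nodes(cat_nodes, threshold=75):
    """Return names of data nodes whose heap.percent exceeds threshold,
    given the text output of /_cat/nodes?v."""
    lines = cat_nodes.strip().splitlines()
    header = lines[0].split()
    hot = []
    for line in lines[1:]:
        # The name column can contain spaces, so cap the split count.
        row = dict(zip(header, line.split(None, len(header) - 1)))
        if 'd' in row.get('node.role', '') and int(row['heap.percent']) > threshold:
            hot.append(row['name'])
    return hot

sample = """\
host    ip      heap.percent ram.percent load node.role master name
x.x.x.x x.x.x.x 78           64          1.08 d         -      Black Cat
x.x.x.x x.x.x.x 80           62          1.41 d         -      Leech
x.x.x.x x.x.x.x 7            85          0.00 -         m      Alex
"""
print(high_heap_data_nodes(sample))  # ['Black Cat', 'Leech']
```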

The following command can be used to increase the indices.breaker.fielddata.limit value as a workaround:

$ curl -XPUT elasticsearch.abc.com/_cluster/settings -d '{ "persistent" : { "indices.breaker.fielddata.limit" : "89%" } }'

Running the command allowed the Kibana search to run without issues and show the data.

The real solution would be to increase the number of nodes, or to reduce the amount of field data that needs to be loaded by limiting the number of indexes.

AWS Lambda can be used to run a script that cleans up indices as a scheduled event.
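The selection logic for such a cleanup script is simple date arithmetic on the daily logstash-YYYY.MM.DD index names. A sketch of that logic (the actual deletion would be an HTTP DELETE against the cluster endpoint, issued from the scheduled Lambda function):

```python
from datetime import datetime, timedelta

def expired_indices(index_names, today, retention_days=14):
    """Return daily 'logstash-YYYY.MM.DD' indices older than the
    retention window; a scheduled job would DELETE each of these."""
    cutoff = today - timedelta(days=retention_days)
    expired = []
    for name in index_names:
        try:
            day = datetime.strptime(name, 'logstash-%Y.%m.%d')
        except ValueError:
            continue  # ignore indices that do not follow the daily pattern
        if day < cutoff:
            expired.append(name)
    return expired

indices = ['logstash-2016.02.08', 'logstash-2016.02.24', '.kibana']
print(expired_indices(indices, datetime(2016, 3, 8)))  # ['logstash-2016.02.08']
```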

Categories: DBA Blogs

Partner Webcast – Oracle PaaS: Application Container Cloud Service

Oracle Application Container Cloud Service, a new Oracle Cloud Platform (PaaS) offering, leverages Docker containers and provides a lightweight infrastructure so that you can run Java SE and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Is SLOB AWR Generation Really, Really, Really Slow on Oracle Database Yes, Unless…

Kevin Closson - Tue, 2016-03-08 16:36

If you are testing SLOB against and find that the AWR report generation phase of runit.sh is taking an inordinate amount of time (e.g., more than 10 seconds) then please be aware that, in the SLOB/awr subdirectory, there is a remedy script rightly called 11204-awr-stall-fix.sql.

Simply execute this script when connected to the instance with sysdba privilege and the problem will be solved.


Filed under: oracle Tagged: Automatic Workload Repository, AWR, SLOB, SLOB Testing

Live to Win – Motorhead Covers and Pythonic Irrigation

The Anti-Kyte - Tue, 2016-03-08 15:31

The recent passing of Lemmy has caused me to reflect on the career of one of the bands who made my growing up (and grown-up) years that much…well…louder.

Yes, I know that serious Python documentation should employ a sprinkling of Monty Python references but, let’s face it, what follows is more of a quick trawl through some basic Python constructs that I’ve found quite useful recently.
If I put them all here, at least I’ll know where to look when I need them again.

In any case, Michael Palin made a guest appearance on the album Rock ‘n’ Roll so that’s probably enough of a link to satisfy the Monty Python criteria.

I find Python a really good language to code in…especially when the alternative is writing a Windows Batch Script. However, there is a “but”.
Python 3 is not backward compatible with Python 2. This can make life rather interesting on occasion.

It is possible to write code that is compatible with both versions of the language and there’s a useful article here on that topic.

The code I’ve written here has been tested on both Python 2 (2.7.6) and Python 3 (3.4.3).
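By way of a quick illustration (this snippet is mine, not from the article linked above), the __future__ module lets Python 2 borrow Python 3 behaviour, which is often the easiest route to dual-version code:

```python
from __future__ import print_function, division  # no-ops on Python 3

# print is now a function on both versions...
print('Ace of Spades', 'Bomber', sep=' / ')

# ...and / is true division on both (Python 2 would otherwise give 1 here)
print(7 / 4)
```

With those two imports in place, the rest of the file can be written in Python 3 style and still run under Python 2.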

One of the great things about Python is that there are a number of modules supplied as standard, which greatly simplify some common programming tasks.
What I’m going to run through here is :

  • Getting information about the environment
  • Handling runtime arguments with the argparse module
  • Reading config files with configparser
  • Writing information to log files with the logging module

Existential Questions

There are a number of questions that you’ll want to answer programatically, sooner rather than later…

Who am I

There are a couple of ways to find out which user you’re connected as from inside Python.
You could simply use the os.getlogin() function…

import os
print( os.getlogin())

…but according to the official documentation this is probably a better option…

import os
import pwd
print( pwd.getpwuid(os.getuid())[0])
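Another option (my addition, the post doesn’t mention it) is the getpass module, which checks the usual environment variables before falling back to the password database:

```python
import getpass

# getpass.getuser() consults LOGNAME/USER/LNAME/USERNAME, then falls back to pwd
print(getpass.getuser())
```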

Additionally, we may want to know the name of the Python program we’re currently in. The following script – called road_crew.py – should do the job :

import os

# Name of the program we're currently running
print('Currently running ' + os.path.basename(__file__))

Running this we get :

Currently running road_crew.py

Where am I

Step forward the platform module, as seen here in this code (saved as hammersmith.py) :

import platform

def main() :
    # Get the name of the host machine
    machine_name = platform.node()
    # Get the OS and architecture
    os_type = platform.system()
    if platform.machine() == 'x86_64' :
        os_arch = '64-bit'
    else :
        os_arch = '32-bit'

    print('Running on '+machine_name+' which is running '+ os_type + ' ' + os_arch)

    # Now get more detailed OS information using the appropriate function...
    if os_type == 'Linux' :
        print(platform.linux_distribution())
    elif os_type == 'Windows' :
        print(platform.win32_ver())
    elif os_type == 'Darwin' :
        #NOTE - I don't have a Mac handy so have no way of testing this statement
        print(platform.mac_ver())
    else :
        print("Sky high and 6000 miles away!")

if __name__ == '__main__' :
    main()

Running this on my Linux Mint machine produces :

Running on mike-TravelMate-B116-M which is running Linux 64-bit
('LinuxMint', '17.3', 'rosa')

As mentioned previously, you also may be quite keen to know the version of Python that your program is running on….

import sys

major = sys.version_info[0]
minor = sys.version_info[1]
micro = sys.version_info[2]

if major == 3 :
    print('Ace of Spades !')
else :
    print('Bomber !')

print('You are running Python ' + str(major) + '.' + str(minor) + '.' + str(micro))

On Python 3, this outputs…

Ace of Spades !
You are running Python 3.4.3

…whilst on Python 2…

Bomber !
You are running Python 2.7.6
When Am I

As for the current date and time, allow me to introduce another_perfect_day.py…

import time

today = time.strftime("%a %d %B %Y")
now = time.strftime("%H:%M:%S")

print("Today's date is " + today);
print("The time is now " + now);

…which gives us …

Today's date is Sun 06 March 2016
The time is now 19:15:19
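The datetime module will do the same job if you need an actual date object to work with. This variation is my own rather than the post’s, with a fixed timestamp so the output is predictable:

```python
import datetime

# A fixed timestamp (matching the output above) instead of datetime.datetime.now()
now = datetime.datetime(2016, 3, 6, 19, 15, 19)

print("Today's date is " + now.strftime("%a %d %B %Y"))
print("The time is now " + now.strftime("%H:%M:%S"))
```

The strftime format codes are the same as the ones used by time.strftime in another_perfect_day.py.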
Argument parsing

The argparse module makes handling arguments passed to the program fairly straightforward.
It allows you to provide a short or long switch for the argument, specify a default value, and even write some help text.
The program is called no_remorse.py and looks like this :

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--age", default = 40, help = "How old are you ? (defaults to 40 - nothing personal)")
args = vars(parser.parse_args())
age = args['age']
if int(age) > 39 :
    print('I remember when Motorhead had a number 1 album !')
else :
    print('Who are Motorhead ?')

The argparse gives us a couple of things. First of all, if we want to know more about the required parameters, we can simply invoke the help :

python no_remorse.py -h
usage: no_remorse.py [-h] [-a AGE]

optional arguments:
  -h, --help         show this help message and exit
  -a AGE, --age AGE  How old are you ? (defaults to 40 - nothing personal)

If we run it without specifying a value for age, it will pick up the default….

python no_remorse.py
I remember when Motorhead had a number 1 album !

…and if I’m tempted to lie about my age (explicitly, as opposed to by omission in the previous example)…

python no_remorse.py -a 39
Who are Motorhead ?

As well as using the single-letter switch for the parameter, we can use the long version …

python no_remorse.py --age 48
I remember when Motorhead had a number 1 album !

One other point to note: the program will not accept arguments passed by position; either the long or short switch for the argument must be specified. Either that or Python comes with its own outrageous lie detector…

python no_remorse.py 25
usage: no_remorse.py [-h] [-a AGE]
no_remorse.py: error: unrecognized arguments: 25
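To show the contrast (this variant is mine rather than the post’s), declaring the argument without a leading dash makes it positional, at which point a bare value on the command line would be accepted:

```python
import argparse

parser = argparse.ArgumentParser()
# No leading '-' : argparse treats this as a required positional argument
parser.add_argument("age", type=int, help="How old are you ?")

# Parsing an explicit list stands in for the real command line arguments
args = parser.parse_args(["25"])
print('Who are Motorhead ?' if args.age < 40 else 'I remember when Motorhead had a number 1 album !')
```

The type=int conversion also saves the int(age) cast used in no_remorse.py.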
Reading a config file

There are times when you need a program to run on multiple environments, each with slightly different details ( machine name, directory paths etc).
Rather than having to pass these details in each time you run the program, you can dump them all into a file for your program to read at runtime.
Usually, you’ll pass in an argument to point the program at the appropriate section of your config file. A config file will look something like this :

[DEV]
db_name = dev01

[TEST]
db_name = test01

[PROD]
db_name = prod

In this example, your program will probably accept an argument specifying which environment it needs to run against and then read the appropriate section of the config file to set variables to the appropriate values.

My working example is slightly different and is based on cover versions that Motorhead have done of other artists’ tracks, together with a couple of my favourite covers of Motorhead songs by other bands :

[MOTORHEAD]
Tammy Wynette = Stand By Your Man
The Kingsmen = Louie Louie

[METALLICA]
Motorhead = Overkill

[CORDUROY]
Motorhead = Motorhead

Now, you could spend a fair amount of time trying to figure out how to read this file and get the appropriate values…or you could just use the configparser module…

Conditional Import – making sure you find Configparser

The configparser module was renamed in Python 3 so the import statement for it is different depending on which version of Python you’re using.
Fortunately, Python offers the ability to conditionally import modules as well as allowing you to alias them.
Therefore, this should solve your problem…

try :
    import configparser
except ImportError :
    import ConfigParser as configparser

So, if we’re running Python 3 the first import statement succeeds.
If we’re running Python 2 we’ll get an ImportError, in which case we import the version 2 ConfigParser and alias it as configparser.
The alias means that we can refer to the module in the same way throughout the rest of the program without having to check which version we’ve actually imported.
As a result, our code should now run on either Python version :

try :
    import configparser
except ImportError :
    import ConfigParser as configparser

config = configparser.ConfigParser()
# Read the config file shown above (the filename here is an assumption)
config.read('covers.cfg')

#Get a single value from the [CORDUROY] section of the config file
cover_artist = 'CORDUROY'
#Find the track they covered, originally recorded by Motorhead
# Pass the config section and the original artist ( the entry on the left-hand side of the "="
# in the config file
track = config.get(cover_artist, 'Motorhead')
# cover_artist and track are string objects so we can use the title method to initcap the output
print(cover_artist.title() + ' covered ' + track.title() + ' by Motorhead')

# Loop through all of the entries in the [MOTORHEAD] section of the config file
for original_artist in config.options('MOTORHEAD') :
    print('Motorhead covered ' + config.get('MOTORHEAD', original_artist) + ' by ' + original_artist.upper())

Run this and we get…

Corduroy covered Motorhead by Motorhead
Motorhead covered Stand By Your Man by TAMMY WYNETTE
Motorhead covered Louie Louie by THE KINGSMEN
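Going the other way (the post doesn’t cover this, so treat it as a footnote of mine), the same module can also write config files, which is handy for generating them programmatically:

```python
try :
    import configparser
except ImportError :
    import ConfigParser as configparser

config = configparser.ConfigParser()
config.add_section('HAWKWIND')
# Lemmy's pre-Motorhead band had a hit of their own
config.set('HAWKWIND', 'Lemmy', 'Silver Machine')

with open('hawkwind.cfg', 'w') as cfg_file :
    config.write(cfg_file)
```

Reading hawkwind.cfg back with config.get('HAWKWIND', 'Lemmy') then returns Silver Machine.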
Dead Men Tell No Tales

When a program dies it usually takes its secrets with it…but fortunately the Python logging module will let your programs sing like a canary.

As with the configparser, there’s no need to write lots of code to open and write to a file.
There are five levels of logging message supported :

  • DEBUG
  • INFO
  • WARNING – the default
  • ERROR
  • CRITICAL
There is a separate call to write each message type. The message itself can be formatted to include information such as a timestamp and the program from which the message was written. There’s a detailed how-to on logging here.

For now though, we want a simple program (logger.py) to write messages to a file wittily and originally titled logger.log…

import logging

logging.basicConfig(
    filename = 'logger.log',
    level = logging.INFO,
    format = '%(asctime)s:%(filename)s:%(levelname)s:%(message)s'
)

logging.debug('No Remorse')
logging.info('Overnight Sensation')
logging.warn('March or Die')
logging.error('Bad Magic')

There’s no output to the screen when we run this program but if we check, there should now be a file called logger.log in the same directory which contains :

2016-03-06 19:19:59,375:logger.py:INFO:Overnight Sensation
2016-03-06 19:19:59,375:logger.py:WARNING:March or Die
2016-03-06 19:19:59,375:logger.py:ERROR:Bad Magic

As you can see, the type of message in the log depends on the logging member invoked to write the message.
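One thing worth adding (my own footnote, the post doesn’t cover it): the DEBUG message never reached the file because messages below the configured level are discarded. Dropping a logger’s level to DEBUG lets everything through; the handler here just collects records in a list so the effect is visible without a log file:

```python
import logging

records = []

class ListHandler(logging.Handler) :
    """A minimal handler that stores formatted records in a list."""
    def emit(self, record) :
        records.append(record.levelname + ':' + record.getMessage())

logger = logging.getLogger('motorhead')
logger.addHandler(ListHandler())

logger.setLevel(logging.DEBUG)   # without this, the debug() call would be filtered out
logger.debug('No Remorse')
logger.error('Bad Magic')

print(records)   # both messages made it through
```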

If you want a more comprehensive/authoritative/coherent explanation of the features I’ve covered here, then have a look at the official Python documentation.
On the other hand, if you want to check out a rather unusual version of one of Motorhead’s signature tracks, this is definitely worth a look.

Filed under: python Tagged: argparse, configparser, logging, os.getlogin, os.path.basename, platform.node, platform.system, pwd.getpwuid, sys.version_info, time.strftime

Wrong Results

Jonathan Lewis - Tue, 2016-03-08 12:57

Just in – a post on the Oracle-L mailing lists asks: “Is it a bug if a query returns one answer if you hint a full tablescan and another if you hint an indexed access path?” And my answer is, I think: “Not necessarily”:

SQL> select /*+ full(pt_range)  */ n2 from pt_range where n1 = 1 and n2 = 1;

        N2
----------
         1

1 row selected.

SQL> select /*+ index(pt_range pt_i1) */ n2 from pt_range where n1 = 1 and n2 = 1;

        N2
----------
         1
         1

2 rows selected.

The index is NOT corrupt.

The reason why I’m not sure you should call this a bug is that it is a side effect of putting the database into an incorrect state. You might have guessed from the name that the table is a (range) partitioned table, and I’ve managed to get this effect by doing a partition exchange with the “without validation” option.

create table t1 (
        n1      number(4),
        n2      number(4)
)
;

insert into t1
select  rownum, rownum
from    all_objects
where   rownum <= 5
;

create table pt_range (
        n1      number(4),
        n2      number(4)
)
partition by range(n1) (
        partition p10 values less than (10),
        partition p20 values less than (20)
)
;

insert into pt_range
select  rownum, rownum
from    all_objects
where   rownum <= 15
;

create index pt_i1 on pt_range(n1,n2);

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'PT_RANGE',
                method_opt => 'for all columns size 1'
        );
end;
/

alter table pt_range
exchange partition p20 with table t1
including indexes
without validation
update indexes
;
The key feature (in this case) is that the query can be answered from the index without reference to the table. When I force a full tablescan Oracle does partition elimination and looks at just one partition; when I force the indexed access path Oracle doesn’t eliminate rows that belong to the wrong partition – though technically it could (because it could identify the target partition by the partition’s data_object_id which is part of the extended rowid stored in global indexes).

Here are the two execution plans – notice how the index operation has no partition elimination while the table operation prunes partitions:

select /*+ full(pt_range)  */ n2 from pt_range where n1 = 1 and n2 = 1

| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT       |          |       |       |     2 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|          |     1 |     6 |     2   (0)| 00:00:01 |     1 |     1 |
|*  2 |   TABLE ACCESS FULL    | PT_RANGE |     1 |     6 |     2   (0)| 00:00:01 |     1 |     1 |

Predicate Information (identified by operation id):
   2 - filter(("N1"=1 AND "N2"=1))

select /*+ index(pt_range pt_i1) */ n2 from pt_range where n1 = 1 and n2 = 1

| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT |       |       |       |     1 (100)|          |
|*  1 |  INDEX RANGE SCAN| PT_I1 |     1 |     6 |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - access("N1"=1 AND "N2"=1)

Note: If I had a query that did a table access by (global) index rowid after the index range scan it WOULD do partition elimination and visit just the one partition – never seeing the data in the wrong partition.

So is it a bug? You told Oracle not to worry about bad data – so how can you complain if it reports bad data?

Harder question – which answer is the “right” one – the answer which shows you all the data matching the query, or the answer which shows you only the data that is in the partition it is supposed to be in ?

Can The Public Cloud Meet the Needs of Your Enterprise Applications?

Pythian Group - Tue, 2016-03-08 12:19


Any applications your company runs on premise can also be run in the public cloud. But does that mean they should be?

While the cloud offers well-documented benefits of flexibility, scalability, and cost efficiency, some applications — and especially business-critical enterprise applications — have specific characteristics that can make them tricky to move into a public cloud environment.

That’s not to say you shouldn’t consider the cloud as an option, but you should be aware of the following enterprise application needs before you make any migration decisions:

1. Highly customized infrastructure

Enterprise applications often rely on software components that are uniquely configured: they may need very specific storage layouts and security settings or tight integration with certain third-party tools. That makes it hard to replace them with generic platform-as-a-service (PaaS) alternatives in the cloud.
The same is true on the infrastructure side: application software components often need particular network configurations and controls that aren’t available from a typical infrastructure-as-a-service (IaaS) offering. (An example would be the way Oracle Real Application Clusters have to allow the cluster software to manipulate network settings, such as controlling IP addresses and network interfaces.)

2. Tightly coupled components

Today’s cloud application architectures are based on “microservices”: collections of services that each perform a specific task and together fulfil the application’s requirements. With enterprise applications, there are so many interdependencies between the various software components that it can be extremely difficult to change, upgrade, move, or scale an individual component without having a huge impact on the rest of the system.

3. Siloed IT departments

Enterprise applications are usually supported by siloed enterprise IT operations — DBAs, system administrators, storage administrators, network administrators and the like — each with their own responsibilities. Cloud deployment, on the other hand, requires much greater focus on collaboration across the IT environment. This means breaking down traditional silos to create full-stack teams with vertical application ownership. Some teams are likely to resist this change as they could end up with significantly less work and responsibility once the management of application components has shifted to the cloud vendor. So migrating to the cloud isn’t just a technical decision; it has people-process implications, too.

4. Costly infrastructure upgrades

Every company knows upgrading enterprise applications is a major undertaking that can often cause downtime and outages. This is true when the application stays inside your own data center, and doubly so when it moves to a cloud provider, given how long it takes to move massive amounts of data over the Internet and the risks associated with unknown issues on the new virtual platform. For these reasons, significant financial commitment is often required to build and maintain an IT team with the right skills to do upgrades quickly and effectively as well as maintain the system.

5. Inflexible licensing models

The components used in enterprise applications are often proprietary products with licensing models that are not compatible with the elasticity of the cloud. For example, many Oracle licenses are for legacy applications and can be used only on particular systems. Transferring those licenses to a cloud-based infrastructure is not an easy task.

In addition, perpetual software licenses are often not portable to the typical pay-as-you-go model used by most cloud providers. Plus, most software vendors don’t have any incentive to transition their customers from locked-in perpetual licenses with a steady maintenance revenue stream to a model that allows them to switch to a competitive product at any time.

Even though the nature of enterprise applications makes them difficult to migrate to the cloud, the benefits of doing so — in costs savings, availability, and business agility — still make it a very compelling proposition. In my next blog, I’ll take a look at some of the paths available to you should you decide to move your enterprise applications to the public cloud.

For more on this topic, check out our white paper on Choosing the Right Public Cloud Platform For Your Enterprise Applications Built on Oracle Database.



Categories: DBA Blogs
