Feed aggregator

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - Wed, 2016-03-09 13:05

Make a sad face... :-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post before very recently was on July 23rd 2014; so it really has been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2016-03-09 13:05

I wrote a paper about Oracle Database Vault in 12c for SANS last year, and it was published by SANS on their website in January 2015. I also prepared and delivered a webinar about the paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Oracle Cloud – Moving a dumpfile into the Database as a Service Cloud

Marco Gralike - Wed, 2016-03-09 08:26
Now that I got myself a bit acquainted with the Database as a Service offering,…

I’m Sasank Vemana and this is how I work

Duncan Davies - Wed, 2016-03-09 08:00

The next profile in our ‘How I Work‘ series is Sasank Vemana. Sasank burst onto the PeopleSoft blogging scene in 2014 with his Sasank’s PeopleSoft Log site, and has been adding entries at a ferocious pace since. He is probably best known for his series of posts on altering the PeopleSoft branding to make it match a corporate palette, as well as configuration and code changes related to UI/UX.

I met Sasank at OOW15 and he’s a lovely chap. He has given some great responses to the questions. I’d love to know how he persuaded his employer to give him 4 monitors and about his use of dual mice!

Name: Sasank Vemana

Occupation: PeopleSoft/Enterprise Technology
Location: Tallahassee, Florida, USA
Current computer:
Desktop: Dell Optiplex 9020 (Windows 7, Intel Core i7, 8 GB RAM)
Laptop: Dell LATITUDE | E6530 (Windows 7, Intel Core i7, 8 GB RAM)
Current mobile devices: Samsung Galaxy S4. Yes – That reminds me I need an upgrade!
I work: To solve problems.

What apps/software/tools can’t you live without?
Google is my friend and my portal to everything. I try not to overload myself with information that I know I can find; Google search helps me find what I am looking for. On a side note, I use WhatsApp and Facebook to keep in touch with my family and friends, who are scattered in different parts of the world. I also use the S Health app to keep track of my physical activities and monitor my health.

Besides your phone and computer, what gadget can’t you live without?
Not a big gadget fan! I can live without them as long as I have a good internet connection, which seems to be the most important thing for me these days. With that, I can do my reading and research, and also remote into any of my computers (if needed) regardless of the device. The same goes for entertainment – Netflix, Spotify, etc.

What’s your workspace like?
Over the past year and a half, I have been using a standing desk at work, thanks to my current employers, who were kind enough to allow me to rearrange my workspace. When I am at work and not in meetings, I try to stand as much as possible and use a bar stool when I get tired. Occasionally, I also just sit down with my laptop wherever I find space. The four-monitor desktop setup helps tremendously when I have multiple applications running. I also have two mice and try to switch between my left and right hand. I am ambidextrous, so it works for me (I would not recommend it otherwise!).

Sasank Vemana - How I Work - Picture 1

Standing desk, 4 monitors and dual mice

Sasank Vemana - How I Work - Picture 2

What do you listen to while you work?
Usually, I am zoned into whatever I am doing and mostly oblivious to events around me. I don’t listen to music while I am at work these days. At times, I listen to live cricket or tennis commentary if anything I care about is going on. A set of Bose noise canceling headphones has long been on my wish list (in case Santa is reading!).

What PeopleSoft-related productivity apps do you use?
Oracle Virtual Box/PUM Images – My savior for evaluation, experimentation and proof of concept purposes.
Web Services: SoapUI, Postman (Chrome Add-On)
Web Development: Browser based Developer Tools (Chrome/Firefox/IE), DOM/StyleSheets/JavaScripts Explorers, Device Emulators, etc., Fiddler, Live HTTP Headers (Firefox Add-On)
Text Editors/Journals: Notepad++, Programmer’s File Editor (PFE), WinMerge, Evernote
DB Tools: Golden (for the most part since it is light weight and does not hog resources), SQL Developer (for some activities), OEM – Oracle Enterprise Manager
Screen Capture/Recording: SnagIt and Jing (short videos) are great for communication

Do you have a 2-line tip that some others might not know?
Tracing tip: Use PeopleCode – 2048 (Show Each), SQL – 3 (Statement, Bind). This gives us every line of code and every SQL statement that executed, in sequence, without all the other clutter that is rarely useful, especially when we are just trying to understand the logic.

What SQL/Code do you find yourself writing most often?
Generally speaking, queries on PeopleTools metadata tables. E.g.: PSAUTHITEM (security related queries), PSPRSMDEFN (portal navigation queries), etc.
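For illustration, a query of that shape can be sketched against an in-memory SQLite stand-in. The PSPRSMDEFN columns and rows below are invented for the example; the real PeopleTools portal registry table has a different, much wider definition.

```python
import sqlite3

# Toy stand-in for the PSPRSMDEFN portal registry table. The column names
# and rows here are illustrative assumptions, not the real table definition.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PSPRSMDEFN (PORTAL_NAME TEXT, PORTAL_OBJNAME TEXT, PORTAL_LABEL TEXT)"
)
conn.executemany(
    "INSERT INTO PSPRSMDEFN VALUES (?, ?, ?)",
    [("EMPLOYEE", "MY_PAGE", "My Page"),
     ("EMPLOYEE", "HR_HOME", "HR Home")],
)

# The kind of ad-hoc portal navigation query described above:
# find registry entries in a portal whose label matches a pattern.
rows = conn.execute(
    "SELECT PORTAL_OBJNAME, PORTAL_LABEL FROM PSPRSMDEFN "
    "WHERE PORTAL_NAME = ? AND PORTAL_LABEL LIKE ?",
    ("EMPLOYEE", "HR%"),
).fetchall()
print(rows)  # [('HR_HOME', 'HR Home')]
```

The point is only the query shape; against the real database you would run the equivalent SQL in whichever client you prefer (Golden, SQL Developer, etc.).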

What would be the one item you’d add to PeopleSoft if you could?
I would add/implement a log aggregation and mining utility. I have spent many hours combing through log files distributed across different servers. It would be great to see something that aggregates all server logs and provides mining capabilities (regex and/or free-form search). After attending Oracle OpenWorld 2015, I understand that PeopleTools 8.55 has some new features – as part of Health Center – that might assist with logs. I look forward to evaluating this functionality!
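Until something like that ships, the core of the idea can be sketched in a few lines of Python. This is a toy illustration, assuming plain-text .log files gathered under one root directory; real server logs would first need collecting to one place.

```python
import os
import re

def grep_logs(root_dir, pattern):
    """Walk a directory tree of server logs and yield
    (path, line_number, line) for every line matching the regex."""
    rx = re.compile(pattern)
    for dirpath, _dirs, files in os.walk(root_dir):
        for name in files:
            if not name.endswith(".log"):
                continue  # only mine files that look like logs
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as fh:
                for line_number, line in enumerate(fh, start=1):
                    if rx.search(line):
                        yield path, line_number, line.rstrip("\n")
```

A free-form search is then just `grep_logs(root, re.escape(text))`, while a regex search passes the pattern straight through.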

What everyday thing are you better at than anyone else?
Probably exploring! Although I would be careful not to say that I am better at it than others; I just find myself doing it a lot without worrying about getting lost. It might seem like a wasteful effort at times, but it is a natural way of learning for me.

What’s the best advice you’ve ever received?
This is not really advice received from someone, but rather some of my favorite quotes that come to mind right now:
– Learn to profit from your losses.
– Don’t make decisions during a storm.
– A manager gets work done through people whereas a leader inspires people to meet shared goals.
– And miles to go before I sleep.


Webcast Q&A: Marketing Asset Management Integrated with Marketing Cloud

WebCenter Team - Wed, 2016-03-09 07:10

WEBCAST Marketing Asset Management Integrated with Marketing Cloud

Thank you to everyone who joined us last Wednesday on our live webcast: Marketing Asset Management Integrated with Marketing Cloud; we appreciate your interest and the great Q&A that followed! For those who missed it or would like to watch it again, the on-demand replay of the webcast is now available here.

Mariam Tariq

On the webcast, Mariam Tariq, Senior Director of Product Management -- Content and Process at Oracle discussed how organizations are struggling with managing marketing assets across multiple digital channels, where content on each channel (web, email, Facebook page, etc.) is created and delivered by different teams of marketers using different technologies. Mariam gave specific examples and a great demonstration to show audience members how IT can empower lines of business by putting the power to create rich microsites in their hands -- driving business agility and innovation.

We also wanted to capture some of the most asked questions and Mariam’s responses here. But please do follow up with us and comment here if you have additional questions or feedback. We always look forward to hearing from you.

Q: Rather than pushing assets directly into other systems, or even using the microsite portal to share them, can our marketing team simply share links directly from Document Cloud and not use Process or Sites Cloud?

Mariam: Absolutely. If you need only the collaboration platform, you can use Documents on its own. Keep in mind that Documents Cloud includes limited use of Sites Cloud for test use cases. So you can try out Sites Cloud with Documents. 

Q: Since you didn’t show Oracle SRM in the demo, can you explain how that integration works?

Mariam: There is a release coming this spring with Documents integration into Oracle SRM. To quickly summarize, Oracle SRM enables social marketers to create and manage social feeds. This includes a layout editor for creating Facebook pages and the ability to schedule and publish updates such as Twitter posts. In SRM, you will see a button allowing you to directly access Documents Cloud content, such as an image. The file will get copied into SRM and you can then use that file in your social messages.

Q: Can the approvers get email about tasks?

Mariam: Most definitely. Approvers get an email about the task with a link to take them directly into Process Cloud to review the files and do the approval.

Q: How does pricing work?

Mariam: Documents and Process use per-user pricing. Sites Cloud is priced on a metric called ‘interactions’, which is a measure of data consumption, so it is essentially priced by the amount of data delivered across all your microsites. More detail is available at cloud.oracle.com.

Q: With the website tool you showed, can we restrict who has access to the site?

Mariam: Yes. Absolutely. You can secure access to the site. 

Q: Are the conversation features you showed in Document Cloud related to the Oracle Social Network offering… it looks similar?

Mariam: Yes. We have officially rolled Oracle Social Network (OSN) into Documents Cloud. Since the social collaboration process often involves sharing and discussing documents like Word and PowerPoint files, the feedback we received from our customers was to simply merge the two rather than having a separate service for each. So now, the OSN features are part of Documents. You can directly access the comments and discussions in context of the organized files and folders of Documents Cloud rather than referencing documents in a separate interface.

Q: Where are the assets stored? Isn't WebCenter Sites from Oracle a content repository?

Mariam: WebCenter Sites is definitely used as a content management system. It's on-premise. Documents Cloud is a multi-tenant cloud solution. You can use them both together. What Documents Cloud provides you is a flexible cloud-based collaboration platform that you can also use with external agencies. Those assets can be consumed within WebCenter Sites. This is a 'hybrid' cloud to on-premise setup. Now in WebCenter Sites you can reference the cloud assets directly from Documents. A subset of your assets could be managed this way. It simply provides a more nimble cloud collaboration extension to complement WebCenter Sites (or any WCM system).

Q: Is it "all push" to customers/clients or is there a feedback loop from customers/clients to track the success of the campaign?

Mariam: In the demo, we're simply pushing the content out cross-channel. We have analytics coming later this year that will show consumption of the content, which will help with the feedback loop to measure engagement.

In case you missed the webcast and would like to listen to the on demand version, you can do so here.

UKOUG Application Server & Middleware SIG

Tim Hall - Wed, 2016-03-09 05:50

ukougI’ll be speaking at the UKOUG Application Server & Middleware SIG tomorrow.

It’s going to be another hit-and-run affair for me. I’m in meetings at work all morning, then I’ll be doing a mad dash to get to my presentation at the SIG, then straight back to work to do an upgrade during the evening.

The agenda looks cool, so I would have liked to stay the whole day, but sadly that’s not going to happen. :(

My favourite bit of any tech event is interacting with people, so just turning up to present is not ideal, but in this case I don’t have a choice in the matter, unless I go AWOL from work… :)

Hope to see you there, even if it is only briefly!

Cheers

Tim…

UKOUG Application Server & Middleware SIG was first posted on March 9, 2016 at 12:50 pm.

Free Oracle Database Monitoring Webinar by Oracle ACE Ronald Rood

Gerger Consulting - Wed, 2016-03-09 05:19
Attend our webinar and learn how you can monitor your Oracle Database and cloud infrastructure with Zabbix, the open source monitoring tool.

The presentation is hosted by Oracle ACE and Certified Master Ronald Rood.

Learn more about the webinar at this link.


Categories: Development

OSB 12c Adapter for Oracle Utilities

Anthony Shorten - Tue, 2016-03-08 23:32

In Oracle Utilities Application Framework V4.2.0.3.0 we introduced Oracle Service Bus adapters to allow that product to process Outbound Messages and, for Oracle Utilities Customer Care And Billing, Notification and Workflow records.

These adapters were compatible with Oracle Service Bus 11g. We have now patched these adapters to be compatible with new facilities in Oracle Service Bus 12c. The following patches must be applied:

Version     Patch Number
4.2.0.3.0   22308653
4.3.0.0.1   21760629
4.3.0.1.0   22308684

Debugging Kibana using Chrome developer tools

Pythian Group - Tue, 2016-03-08 17:53

Amazon Elasticsearch Service is a managed service to implement Elasticsearch in AWS. Underlying instances are managed by AWS and interaction with the service is available through API and AWS GUI.

Kibana is also integrated with Amazon Elasticsearch Service. We came across an issue that caused Kibana 4 to show the following error message when searching for *.

Courier Fetch: 10 of 60 shards failed.

The error is not very descriptive.

As Amazon Elasticsearch Service exposes only an endpoint, we do not have direct access to the underlying instances; we only have access to a few API tools.

We decided to see what could be found from the Chrome browser.

The Chrome Developer Tools (DevTools) offer lots of useful debugging possibilities.

DevTools can be started using several methods.

1. Right-click and select Inspect.
2. From Menu -> More Tools -> Developer Tools
3. Press F12

The Network tab in DevTools can be used to debug a wide variety of issues. It records every request made while a web page is loading and captures a wide range of information about each request, such as the HTTP method, the status, and the time taken to complete the request.

By clicking on any of the requested resources, we can get more information about the request.

In this case, the interesting bit was under the Preview tab. The Preview tab captures the data Chrome got back from the search and stores it as objects.

A successful query would look like the image below, captured from the Kibana 3 instance of the public website logstash.openstack.org.

kibana-es

We checked the "_msearch?timeout=3000.." request and received the following error messages under the nested values (for example "responses" -> "0" -> "_shards" -> "failures" -> "0"):

{ index: "logstash-2016.02.24", shard: 1, status: 500,
  reason: "RemoteTransportException[[Leech][inet[/10.212.25.251:9300]][indices:data/read/search[phase/query]]];
    nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]];
    nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]];
    nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]];" }

So the issue is clear: fielddata usage is above the limit.
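For what it's worth, the used/limit figures can be pulled out of such messages programmatically; a small sketch matching the "limit of [bytes/human-readable]" pattern shown above:

```python
import re

def parse_fielddata_limit(message):
    """Extract (limit_in_bytes, human_readable) from a
    CircuitBreakingException message like the one above."""
    m = re.search(r"larger than limit of \[(\d+)/([^\]]+)\]", message)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)

msg = ("CircuitBreakingException[[FIELDDATA] Data too large, data for "
       "[@timestamp] would be larger than limit of [5143501209/4.7gb]]")
print(parse_fielddata_limit(msg))  # (5143501209, '4.7gb')
```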

As per Amazon documentation,

Field Data Breaker –
Percentage of JVM heap memory allowed to load a single data field into memory. The default value is 60%. We recommend raising this limit if you are uploading data with large fields.
indices.breaker.fielddata.limit
For more information, see Field data in the Elasticsearch documentation.
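The limit itself is simple arithmetic on the heap. A sketch of the calculation (the 8 GiB heap below is an illustrative assumption, not a figure taken from this cluster):

```python
def fielddata_limit_bytes(heap_bytes, limit_pct=60):
    """indices.breaker.fielddata.limit defaults to 60% of the JVM heap."""
    return int(heap_bytes * limit_pct / 100)

heap = 8 * 1024 ** 3  # assume an 8 GiB heap, purely for illustration
print(fielddata_limit_bytes(heap))      # 5153960755 bytes, i.e. roughly 4.8 GiB
print(fielddata_limit_bytes(heap, 89))  # the raised 89% limit used in the workaround below
```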

The following URL documents the supported Amazon Elasticsearch operations.

http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-supported-operations.html

On checking the current heap usage (the heap.percent column) of the data nodes, we can see that heap usage is very high on some of them:

$ curl -XGET "http://elasticsearch.abc.com/_cat/nodes?v"
host ip heap.percent ram.percent load node.role master name
x.x.x.x   10   85   0.00   -   m   Drax the Destroyer
x.x.x.x   7   85   0.00   -   *   H.E.R.B.I.E.
x.x.x.x   78   64   1.08   d   -   Black Cat
x.x.x.x   80   62   1.41   d   -   Leech
x.x.x.x   7   85   0.00   -   m   Alex
x.x.x.x   78   63   0.27   d   -   Saint Anna
x.x.x.x   80   63   0.28   d   -   Martinex
x.x.x.x   78   63   0.59   d   -   Scorpio
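Eyeballing that listing works at this scale, but the check is easy to script. A sketch that assumes the column layout above, with one whitespace-separated token per column and the (possibly multi-word) node name trailing:

```python
def high_heap_nodes(cat_nodes_text, threshold=75):
    """Return the names of nodes in `_cat/nodes?v` output whose
    heap.percent exceeds the threshold."""
    lines = cat_nodes_text.strip().splitlines()
    header = lines[0].split()
    heap_idx = header.index("heap.percent")
    name_idx = header.index("name")  # name is last and may contain spaces
    flagged = []
    for line in lines[1:]:
        cols = line.split()
        if int(cols[heap_idx]) > threshold:
            flagged.append(" ".join(cols[name_idx:]))
    return flagged

sample = """host ip heap.percent ram.percent load node.role master name
x.x.x.x x.x.x.x 80 62 1.41 d - Leech
x.x.x.x x.x.x.x 7 85 0.00 - m Alex
x.x.x.x x.x.x.x 78 63 0.59 d - Saint Anna"""
print(high_heap_nodes(sample))  # ['Leech', 'Saint Anna']
```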

The following command can be used to increase the indices.breaker.fielddata.limit value as a workaround.

$ curl -XPUT elasticsearch.abc.com/_cluster/settings -d '{ "persistent" : { "indices.breaker.fielddata.limit" : "89%" } }'

Running the command allowed the Kibana search to run without issues and show the data.
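The same workaround can also be applied from a script using nothing but the Python standard library. A sketch, assuming (as the curl example does) that the cluster settings endpoint is reachable without authentication:

```python
import json
import urllib.request

def raise_fielddata_limit(endpoint, pct="89%"):
    """PUT a persistent indices.breaker.fielddata.limit setting to the
    cluster, mirroring the curl workaround above."""
    body = json.dumps(
        {"persistent": {"indices.breaker.fielddata.limit": pct}}
    ).encode("utf-8")
    req = urllib.request.Request(
        endpoint.rstrip("/") + "/_cluster/settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    # The caller is responsible for checking the response
    # (HTTP status and the acknowledgement in the JSON body).
    return urllib.request.urlopen(req)

# Example (would contact the cluster, so not run here):
# raise_fielddata_limit("http://elasticsearch.abc.com")
```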

The real solution would be to increase the number of nodes, or to reduce the amount of field data that needs to be loaded by limiting the number of indexes.

AWS Lambda can be used to run a script that cleans up indices as a scheduled event.

Categories: DBA Blogs

Partner Webcast – Oracle PaaS: Application Container Cloud Service

Oracle Application Container Cloud Service, a new Oracle Cloud Platform (PaaS) offering, leverages Docker containers and provides a lightweight infrastructure so that you can run Java SE and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Is SLOB AWR Generation Really, Really, Really Slow on Oracle Database 11.2.0.4? Yes, Unless…

Kevin Closson - Tue, 2016-03-08 16:36

If you are testing SLOB against 11.2.0.4 and find that the AWR report generation phase of runit.sh is taking an inordinate amount of time (e.g., more than 10 seconds) then please be aware that, in the SLOB/awr subdirectory, there is a remedy script rightly called 11204-awr-stall-fix.sql.

Simply execute this script when connected to the instance with sysdba privilege and the problem will be solved. 

11.2.0.4-awr-stall-fix.sql

 


Filed under: oracle Tagged: 11.2.0.4, Automatic Workload Repository, AWR, SLOB, SLOB Testing

Live to Win – Motorhead Covers and Pythonic Irrigation

The Anti-Kyte - Tue, 2016-03-08 15:31

The recent passing of Lemmy has caused me to reflect on the career of one of the bands who made my growing up (and grown-up) years that much…well…louder.

Yes, I know that serious Python documentation should employ a sprinkling of Monty Python references but, let’s face it, what follows is more of a quick trawl through some basic Python constructs that I’ve found quite useful recently.
If I put them all here, at least I’ll know where to look when I need them again.

In any case, Michael Palin made a guest appearance on the album Rock ‘n’ Roll so that’s probably enough of a link to satisfy the Monty Python criteria.

I find Python a really good language to code in…especially when the alternative is writing a Windows Batch Script. However, there is a “but”.
Python 3 is not backward compatible with Python 2. This can make life rather interesting on occasion.

It is possible to write code that is compatible with both versions of the language and there’s a useful article here on that topic.

The code I’ve written here has been tested on both Python 2 (2.7.6) and Python 3 (3.4.3).

One of the great things about Python is that there are a number of modules supplied as standard, which greatly simplify some common programming tasks.
What I’m going to run through here is :

  • Getting information about the environment
  • Handling runtime arguments with the argparse module
  • Reading config files with configparser
  • Writing information to log files with the logging module

Existential Questions

There are a number of questions that you’ll want to answer programatically, sooner rather than later…

Who am I

There are a couple of ways to find out the user you’re connected as from inside Python.
You could simply use the os.getlogin() function…

import os
print( os.getlogin())

…but according to the official documentation [link] this is probably a better option…

import os
import pwd
print( pwd.getpwuid(os.getuid())[0])

Additionally, we may want to know the name of the Python program we’re currently in. The following script – called road_crew.py – should do the job :

import os
print(os.path.basename(__file__))

Running this we get :

road_crew.py
Where am I

Step forward the platform module, as seen here in this code (saved as hammersmith.py) :

import platform

def main() :
    # Get the name of the host machine
    machine_name = platform.node()
    # Get the OS and architecture
    os_type = platform.system()
    if platform.machine() == 'x86_64' :
        os_arch = '64-bit'
    else :
        os_arch = '32-bit'

    print('Running on '+machine_name+' which is running '+ os_type + ' ' + os_arch)

    # Now get more detailed OS information using the appropriate function...
    if os_type == 'Linux' :
        print(platform.linux_distribution())
    elif os_type == 'Windows' :
        print(platform.win32_ver())
    elif os_type == 'Mac' :
        #NOTE - I don't have a Mac handy so have no way of testing this statement
        print(platform.mac_ver())
    else :
        print("Sky high and 6000 miles away!")

if __name__ == '__main__' :
    main()

Running this on my Linux Mint machine produces :

Running on mike-TravelMate-B116-M which is running Linux 64-bit
('LinuxMint', '17.3', 'rosa')

As mentioned previously, you also may be quite keen to know the version of Python that your program is running on….

import sys

major = sys.version_info[0]
minor = sys.version_info[1]
micro = sys.version_info[2]

if major == 3 :
    print('Ace of Spades !')
else :
    print('Bomber !')

print('You are running Python ' + str(major) + '.' + str(minor) + '.' + str(micro))

On Python 3, this outputs…

Ace of Spades !
You are running Python 3.4.3

…whilst on Python 2…

Bomber !
You are running Python 2.7.6
When Am I

As for the current date and time, allow me to introduce another_perfect_day.py…

import time

today = time.strftime("%a %d %B %Y")
now = time.strftime("%H:%M:%S")

print("Today's date is " + today);
print("The time is now " + now);

…which gives us …

Today's date is Sun 06 March 2016
The time is now 19:15:19
Argument parsing

The argparse module makes handling arguments passed to the program fairly straightforward.
It allows you to provide a short or long switch for the argument, specify a default value, and even write some help text.
The program is called no_remorse.py and looks like this :

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--age", default = 40, help = "How old are you ? (defaults to 40 - nothing personal)")
args = vars(parser.parse_args())
age = args['age']
if int(age) > 39 :
    print('I remember when Motorhead had a number 1 album !')
else :
    print('Who are Motorhead ?')

The argparse module gives us a couple of things. First of all, if we want to know more about the required parameters, we can simply invoke the help :

python no_remorse.py -h
usage: no_remorse.py [-h] [-a AGE]

optional arguments:
  -h, --help         show this help message and exit
  -a AGE, --age AGE  How old are you ? (defaults to 40 - nothing personal)

If we run it without specifying a value for age, it will pick up the default….

python no_remorse.py
I remember when Motorhead had a number 1 album !

…and if I’m tempted to lie about my age (explicitly, as opposed to by omission in the previous example)…

python no_remorse.py -a 39
Who are Motorhead ?

As well as using the single-letter switch for the parameter, we can use the long version …

python no_remorse.py --age 48
I remember when Motorhead had a number 1 album !

One other point to note: the program will not accept arguments passed by position; either the long or short switch for the argument must be specified. Either that or Python comes with its own outrageous lie detector…

python no_remorse.py 25
usage: no_remorse.py [-h] [-a AGE]
no_remorse.py: error: unrecognized arguments: 25
Reading a config file

There are times when you need a program to run on multiple environments, each with slightly different details ( machine name, directory paths etc).
Rather than having to pass these details in each time you run the program, you can dump them all into a file for your program to read at runtime.
Usually, you’ll pass in an argument to point the program at the appropriate section of your config file. A config file will look something like this :

[DEV]
db_name = dev01

[TEST]
db_name = test01

[PROD]
db_name = prod

In this example, your program will probably accept an argument specifying which environment it needs to run against and then read the appropriate section of the config file to set variables to the appropriate values.

My working example is slightly different and is based on cover versions that Motorhead have done of other artists’ tracks, together with a couple of my favourite covers of Motorhead songs by other bands :

[MOTORHEAD]
Tammy Wynette = Stand By Your Man
The Kingsmen = Louie Louie

[METALLICA]
Motorhead = Overkill

[CORDUROY]
Motorhead = Motorhead

Now, you could spend a fair amount of time trying to figure out how to read this file and get the appropriate values…or you could just use the configparser module…

Conditional Import – making sure you find Configparser

The configparser module was renamed in Python 3, so the import statement for it is different depending on which version of Python you’re using.
Fortunately, Python offers the ability to conditionally import modules as well as allowing you to alias them.
Therefore, this should solve your problem…

try:
    import configparser
except ImportError :
    import ConfigParser as configparser

So, if we’re running Python 3 the first import statement succeeds.
If we’re running Python 2 we’ll get an ImportError, in which case we import the version 2 ConfigParser and alias it as configparser.
The alias means that we can refer to the module in the same way throughout the rest of the program without having to check which version we’ve actually imported.
As a result, our code should now run on either Python version :

try:
    import configparser
except ImportError :
    import ConfigParser as configparser

config = configparser.ConfigParser()
config.read('covers.cfg')

#Get a single value from the [CORDUROY] section of the config file
cover_artist = 'CORDUROY'
#Find the track they covered, originally recorded by Motorhead
# Pass the config section and the original artist (the entry on the left-hand side
# of the "=" in the config file)
track = config.get(cover_artist, 'Motorhead')
# cover_artist and track are string objects so we can use the title method to initcap the output
print(cover_artist.title() + ' covered ' + track.title() + ' by Motorhead')

# Loop through all of the entries in the [MOTORHEAD] section of the config file
for original_artist in config.options('MOTORHEAD') :
    print('Motorhead covered ' + config.get('MOTORHEAD', original_artist) + ' by ' + original_artist.upper())

Run this and we get…

Corduroy covered Motorhead by Motorhead
Motorhead covered Stand By Your Man by TAMMY WYNETTE
Motorhead covered Louie Louie by THE KINGSMEN
Dead Men Tell No Tales

…but fortunately the Python logging module will let your programs sing like a canary.

As with the configparser, there’s no need to write lots of code to open and write to a file.
There are five levels of logging message supported :

  • DEBUG
  • INFO
  • WARNING – the default
  • ERROR
  • CRITICAL

There is a separate call to write each message type. The message itself can be formatted to include information such as a timestamp and the program from which the message was written. There’s a detailed how-to on logging here.

For now though, we want a simple program (logger.py) to write messages to a file wittily and originally titled logger.log…

import logging

logging.basicConfig(
    filename='logger.log',
    level=logging.INFO,
    format='%(asctime)s:%(filename)s:%(levelname)s:%(message)s'
)

logging.debug('No Remorse')
logging.info('Overnight Sensation')
logging.warning('March or Die')
logging.error('Bad Magic')

There’s no output to the screen when we run this program but if we check, there should now be a file called logger.log in the same directory which contains :

2016-03-06 19:19:59,375:logger.py:INFO:Overnight Sensation
2016-03-06 19:19:59,375:logger.py:WARNING:March or Die
2016-03-06 19:19:59,375:logger.py:ERROR:Bad Magic

As you can see, the type of message in the log depends on the logging method invoked to write the message. Note that the DEBUG message is absent because we set the logging level to INFO in basicConfig.

If you want a more comprehensive/authoritative/coherent explanation of the features I’ve covered here, then have a look at the official Python documentation.
On the other hand, if you want to check out a rather unusual version of one of Motorhead’s signature tracks, this is definitely worth a look.


Filed under: python Tagged: argparse, configparser, logging, os.getlogin, os.path.basename, platform.node, platform.system, pwd.getpwuid, sys.version_info, time.strftime

Wrong Results

Jonathan Lewis - Tue, 2016-03-08 12:57

Just in – a post on the Oracle-L mailing lists asks: “Is it a bug if a query returns one answer if you hint a full tablescan and another if you hint an indexed access path?” And my answer is, I think: “Not necessarily”:


SQL> select /*+ full(pt_range)  */ n2 from pt_range where n1 = 1 and n2 = 1;

        N2
----------
         1
SQL> select /*+ index(pt_range pt_i1) */ n2 from pt_range where n1 = 1 and n2 = 1;

        N2
----------
         1
         1

The index is NOT corrupt.

The reason why I’m not sure you should call this a bug is that it is a side effect of putting the database into an incorrect state. You might have guessed from the name that the table is a (range) partitioned table, and I’ve managed to get this effect by doing a partition exchange with the “without validation” option.


create table t1 (
        n1      number(4),
        n2      number(4)
);

insert into t1
select  rownum, rownum
from    all_objects
where   rownum <= 5
;

create table pt_range (
        n1      number(4),
        n2      number(4)
)
partition by range(n1) (
        partition p10 values less than (10),
        partition p20 values less than (20)
)
;

insert into pt_range
select
        rownum, rownum
from
        all_objects
where
        rownum <= 15
;
create index pt_i1 on pt_range(n1,n2);

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'PT_RANGE',
                method_opt => 'for all columns size 1'
        );
end;
/

alter table pt_range
exchange partition p20 with table t1
including indexes
without validation
update indexes
;

The key feature (in this case) is that the query can be answered from the index without reference to the table. When I force a full tablescan Oracle does partition elimination and looks at just one partition; when I force the indexed access path Oracle doesn’t eliminate rows that belong to the wrong partition – though technically it could (because it could identify the target partition by the partition’s data_object_id which is part of the extended rowid stored in global indexes).

Here are the two execution plans (from 11.2.0.4) – notice how the index operation has no partition elimination while the table operation prunes partitions:


select /*+ full(pt_range)  */ n2 from pt_range where n1 = 1 and n2 = 1

---------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |       |       |     2 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|          |     1 |     6 |     2   (0)| 00:00:01 |     1 |     1 |
|*  2 |   TABLE ACCESS FULL    | PT_RANGE |     1 |     6 |     2   (0)| 00:00:01 |     1 |     1 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("N1"=1 AND "N2"=1))


select /*+ index(pt_range pt_i1) */ n2 from pt_range where n1 = 1 and n2 = 1

--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |       |       |     1 (100)|          |
|*  1 |  INDEX RANGE SCAN| PT_I1 |     1 |     6 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("N1"=1 AND "N2"=1)


Note: If I had a query that did a table access by (global) index rowid after the index range scan it WOULD do partition elimination and visit just the one partition – never seeing the data in the wrong partition.
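As a side note, you can see where each row physically lives by decoding the data object id out of its rowid. This is a hedged sketch against the test objects created above, using the standard DBMS_ROWID package; compare the decoded ids with the partition segments listed in user_objects to spot rows sitting in a partition whose bounds they violate after the “without validation” exchange:

```sql
-- Which physical segment does each row belong to?
select
        dbms_rowid.rowid_object(rowid)  as data_object_id,
        n1, n2
from    pt_range;

-- Map those ids back to the partition segments.
select  object_name, subobject_name, data_object_id
from    user_objects
where   object_name = 'PT_RANGE';
```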

So is it a bug? You told Oracle not to worry about bad data – so how can you complain if it reports bad data?

Harder question – which answer is the “right” one: the answer which shows you all the data matching the query, or the answer which shows you only the data that is in the partition it is supposed to be in?


Can The Public Cloud Meet the Needs of Your Enterprise Applications?

Pythian Group - Tue, 2016-03-08 12:19

 

Any applications your company runs on premises can also be run in the public cloud. But does that mean they should be?

While the cloud offers well-documented benefits of flexibility, scalability, and cost efficiency, some applications — and especially business-critical enterprise applications — have specific characteristics that can make them tricky to move into a public cloud environment.

That’s not to say you shouldn’t consider the cloud as an option, but you should be aware of the following enterprise application needs before you make any migration decisions:

1. Highly customized infrastructure

Enterprise applications often rely on software components that are uniquely configured: they may need very specific storage layouts and security settings or tight integration with certain third-party tools. That makes it hard to replace them with generic platform-as-a-service (PaaS) alternatives in the cloud.
The same is true on the infrastructure side: application software components often need particular network configurations and controls that aren’t available from a typical infrastructure-as-a-service (IaaS) offering. (An example would be the way Oracle Real Application Clusters have to allow the cluster software to manipulate network settings, such as controlling IP addresses and network interfaces.)

2. Tightly coupled components

Today’s cloud application architectures are based on “microservices” — collections of services that perform specific tasks. When combined, these answer the whole of the application requirements. With enterprise applications, there are so many interdependencies between the various software components that it can be extremely difficult to change, upgrade, move, or scale an individual component without having a huge impact on the rest of the system.

3. Siloed IT departments

Enterprise applications are usually supported by siloed enterprise IT operations — DBAs, system administrators, storage administrators, network administrators and the like — each with their own responsibilities. Cloud deployment, on the other hand, requires much greater focus on collaboration across the IT environment. This means breaking down traditional silos to create full-stack teams with vertical application ownership. Some teams are likely to resist this change as they could end up with significantly less work and responsibility once the management of application components has shifted to the cloud vendor. So migrating to the cloud isn’t just a technical decision; it has people-process implications, too.

4. Costly infrastructure upgrades

Every company knows upgrading enterprise applications is a major undertaking and can often cause downtime and outages. This is true when the application stays inside your own data center — and doubly so when it moves to a cloud provider due to how long it takes to move massive amounts of data through the Internet and risks associated with unknown issues on the new virtual platform. For these reasons, significant financial commitment is often required to build and maintain an IT team with the right skills to do upgrades quickly and effectively as well as maintain the system.

5. Inflexible licensing models

The components used in enterprise applications are often proprietary products with licensing models that are not compatible with the elasticity of the cloud. For example, many Oracle licenses are for legacy applications and can be used only on particular systems. Transferring those licenses to a cloud-based infrastructure is not an easy task.

In addition, perpetual software licenses are often not portable to the typical pay-as-you-go model used by most cloud providers. Plus, most software vendors don’t have any incentive to transition their customers from locked-in perpetual licenses with a steady maintenance revenue stream to a model that allows them to switch to a competitive product at any time.

Even though the nature of enterprise applications makes them difficult to migrate to the cloud, the benefits of doing so — in costs savings, availability, and business agility — still make it a very compelling proposition. In my next blog, I’ll take a look at some of the paths available to you should you decide to move your enterprise applications to the public cloud.

For more on this topic, check out our white paper on Choosing the Right Public Cloud Platform For Your Enterprise Applications Built on Oracle Database.


Categories: DBA Blogs

Renaming #EM12c / #EM13c Targets

DBASolved - Tue, 2016-03-08 09:20

Oracle Enterprise Manager is a complex piece of software that many organizations are running now. Some organizations set out with a formalized naming standard; some do not. Those who do not often end up identifying a naming standard later down the road and then making requests to change the names of the targets being monitored. In order to do this, there are two ways:

1. Delete and rediscover the target and rename at time of discovery
2. Change the name from the backend using EMCLI

The first way is painful to say the least, especially when you have thousands upon thousands of targets. So this post is going to focus on how to change the name from the backend using EMCLI and a few other little tips.

EMCLI is a nice tool to use. It provides two options for renaming of targets. The first option is rename_target and the second is modify_target. The rename_target option is used to rename the target on the repository side, while the modify_target option is used to rename at the agent level. Both options are required when renaming a target because the target needs to stay in-sync to retain the history of the target.

To make this process a bit more automated, I’ve created a perl script that will do the renaming for me based on information in a space delimited flat file. The script is as follows:

#!/usr/bin/perl -w
use strict;
use warnings;

##########################
#Notes
##########################
#
#To help with renaming the entity_name in the repository database,
#comment out block of code in SYSMAN.EM_TARGET from line 8028 thru 8035
#
##########################
#GLOBAL Variables
##########################
my $oem_home_bin = "";
my $time_now = localtime();
my ($variable, $sysman_pwd) = @ARGV;
my $count = 0;
my @columns;

##########################
#Main Program
##########################

open (FILE, "< $variable") or die "$!\n";
@columns = ("", 0, 0, 0);
print "\nStart time: ".$time_now."\n";
emcli_login();
while (<FILE>)
{
	my $line = $_;
	@columns = split(' ',$line, 4);
	rename_target(@columns);
	$count = $count+1;
} #end file read
close (FILE) or die "$!\n";
my $end_time=localtime();
print "\nNumber of changes: ".$count;
print "\nEnd time: ".$end_time."\n";
emcli_logout();

##########################
#Sub-Programs
##########################
sub emcli_login{
	print "\n";
	system($oem_home_bin.'/emcli login -username=sysman -password='.$sysman_pwd);
	system($oem_home_bin.'/emcli sync');
	print "\n";
}

sub emcli_logout{
	print "\n";
	system($oem_home_bin.'/emcli logout');
	print "\n";
}

sub rename_target{
	#Parameters
	my ($target_name, $target_type, $server_name )=@columns;
	my $mod_target;
	my $new_name;
	my $cmd;
	my $cmd1;

	if ($target_type =~ /rac_database/)
	{
		chomp($target_name);
		chomp($server_name);
		$mod_target = $target_name;
		$target_name = substr($target_name, 0, -4);
		$new_name = $target_name."_".$server_name;
		#print $new_name;
		print "\n";
		$cmd = 'emcli modify_target -name="'.$mod_target.'" -type="'.$target_type.'" -display_name="'.$new_name.'" -on_agent';
		print $cmd."\n";
		#print "\n!!!!Executing on agent side!!!!\n";
		#system($oem_home_bin.'/'.$cmd);
		$cmd1 = 'emcli rename_target -target_type="'.$target_type.'" -target_name="'.$mod_target.'" -new_target_name="'.$new_name.'"';
		print $cmd1."\n";
		#print "\n!!!!Executing on repository side!!!!\n";
		#system($oem_home_bin.'/'.$cmd1);
	}
}
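For reference, the space-delimited flat file the script reads might look like the following (hypothetical target and server names; the columns are target name, target type and server name). Note that the script strips the last four characters of the target name before appending the server name to build the new name:

```
PROD_db1 rac_database dbnode01
TEST_db1 rac_database dbnode02
```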

Notice that I’m doing the renaming at the agent side along with the repository side. Although this looks straightforward, I’ve found that the EMCLI rename command (rename_target) is actually driven by the package EM_TARGET in the SYSMAN schema. There is a small set of code in this package that will prevent renaming of certain target types if they are currently being monitored and managed by OEM.

To identify what targets are managed, the following SQL can be used:

SELECT ENTITY_TYPE, ENTITY_NAME, DISPLAY_NAME FROM EM_MANAGEABLE_ENTITIES 
WHERE ENTITY_TYPE='oracle_database' and promote_status=3 and manage_status='2';

The SQL above will provide you with the target type (entity_type), name (entity_name), and display name (display_name). These three columns are important because they correlate directly to what you will see in OEM. About 90% of the screens in OEM use the display_name column; the other 10% use the entity_name. When you start renaming, you will want these names to match, but keep in mind they may drift apart over the long haul.

Now, back to the code in the EM_TARGET package. When renaming targets, some targets will report back that they cannot be changed. This is because the target is already being managed by OEM. In order to bypass this, you need to update the EM_TARGET package body and comment out a small section of code (make sure you back up the package before doing anything). The lines of code that need to be commented out are between 8028 and 8035.

-- we will implement rename of agent side targets when it is fully
     -- supported by agent
    --IF ( l_trec.manage_status = MANAGE_STATUS_MANAGED AND
    --     l_trec.emd_url IS NOT NULL) 
    --THEN
    --  raise_application_error(MGMT_GLOBAL.INVALID_PARAMS_ERR,
    --      MGMT_GLOBAL.INVALID_PARAMS_ERR||' Not allowed') ;
    --END IF ;

After commenting out these lines of code, recompile the package. Then you will be able to rename repository targets using EMCLI even though they are already managed targets. This will affect the entity_name column and allow you to update the other 10% of pages that are not immediately changed.
 
Another way to change target names, once the EM_TARGET package has been updated, is to use SQL to make the changes directly.

exec sysman.em_target.rename_target(target_type, current_name, new_name, new_name);
commit;

Once the commit has happened, then the OEM pages can be refreshed and the new entity_name will be displayed.

Well, I hope this has provided you some explanation on how to change existing targets within the EM framework.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs


Oracle Cloud – Managing the Database as a Service Cloud

Marco Gralike - Tue, 2016-03-08 09:05
So I am not working in the Public Cloud for a week and last night…

Sources of Inspiration on International Women’s Day

Pythian Group - Tue, 2016-03-08 08:58

True inspiration comes in many forms and I consider myself fortunate to be inspired every day by the women around me. International Women’s Day is the perfect opportunity to reflect on the women in our lives who positively influence us.

This post is my heartfelt thank you to the women in my ‘circle’ who have made an indelible mark in my life. They are women who continue to inspire, challenge and motivate me.

The women on Pythian HR’s team: These women continually teach me valuable life lessons. They are mothers, partners, sisters, care providers, aunts, cousins, and friends to many. They are strong, spirited, supportive and have generous natures that are contagious. They demonstrate an unwavering commitment to working hard, they’re incredibly talented and they have a steady focus on doing what’s best for our employees. These women go above and beyond and approach every puzzle with optimism.

My mother:  My mother is the most positive and ‘glass half full’ person that I know. She is a person who never fails to find the bright side to life’s most thought-provoking issues and one of her favourite questions to ask her loved ones is “Are You Happy?” (Spoiler alert: she’s not satisfied unless the answer is a truthful “yes”). Her love, guidance and support have helped sustain me through so much and over the years she has evolved into my BFF.

My friend, Jen:  Jen is a breast cancer survivor who decided to fight back and co-found Vixens Victorious. In October 2015, the dynamic duo of Vixens Victorious successfully launched Lights! Camera! CURE! which showcases female film makers from Canada and the proceeds go to support the Ottawa Regional Cancer Society. Jen’s positive spirit and take charge attitude empowers everyone who meets her.

My friend, Kate:  Kate moved to Canada with her three month old daughter to start a new journey with her husband. She took the initiative to make new friends, develop a network and often navigate a new city on her own when her partner travelled for work. Kate isn’t one to complain about adversities in life; she is courageous and gratefully embraces her adventure.

My fitness trainer, Jules:  Jules gets out of bed every morning, puts on her workout gear and travels across Ottawa to provide the most fun and effective workouts to her clients. She generously shares her own personal health journey and always finds a way to connect with her clients so they can experience the one on one attention they need. She is full of greatness.

Our family physician, Dr. Judy:  Dr. Judy’s medical practice is thriving because of her commitment to patient care. She ensures you are her priority in the moments that you are with her. She makes each of her patients feel important, cared for and heard. Dr. Judy exudes a kind and caring nature that everyone could benefit from.

My neighbor, Anne Marie:  In her late forties, Anne Marie taught herself to swim so she could begin competing in triathlons. She now travels internationally to compete in races. I’m inspired by her hard work, determination and strategic ability to set and meet goals.

The influences (sometimes subtle) of these women make an impact on how I choose to live my life. I am thankful for all of them.

On this International Women’s Day, I encourage you to think about who inspires you and why. Bonus points if you honour them with words of appreciation!

Categories: DBA Blogs

Stereo Pioneer Marantz and All Nippon Airways Drive Innovation and Growth with Oracle Service Cloud

Linda Fishman Hoyle - Tue, 2016-03-08 08:47
Stereo pioneer Denon and Marantz (now D+M Group) is using Oracle Service Cloud to change its customer experience tune for the wireless world. With the latest customer service solutions, D+M Group is delivering proactive service experiences, while addressing IoT and mobile market trends. For instance, during a recent new product launch they were able to register the new speakers via a smartphone app, monitor early adopters’ feedback, identify glitches, and address any issues before they grew. The contact center cloud platform, Oracle Service Cloud, helps D+M Group not only provide proactive service, but deliver next-generation service across growing digital channels, as well as traditional email, web self-service, and phone to meet two very different customer expectations.

In addition, All Nippon Airways is using Oracle Service Cloud to maintain impactful growth during a population decline. It is now more important than ever for the organization to deliver differentiated customer service to win new customers and increase revenue while reducing support costs. With Oracle Service Cloud, ANA has built an integrated customer service platform that manages customer inquiries across various means of communication, including email and chat. Now, 70 percent of passengers book domestic flights on ANA’s website without getting on the phone with a customer service agent. Not to mention that this year ANA was rated among the top five airlines in the world for customer service!

For more information, check out the Marantz and All Nippon Airways Forbes profiles, as well as the recent Oracle CX blog post Forrester Names Oracle a ‘Leader’ in Customer Service Solutions for Midsize Teams and Enterprise Organizations.

OUG Ireland 2016 – Summary

Tim Hall - Tue, 2016-03-08 08:05


The day started at 05:00. I lay in the bath for 20 minutes in denial, wondering how I would manage to stay awake for the day. I’ve been ill for ages, so I felt like I was running on empty anyway. Once I had managed to drag myself out of the bath and get dressed, I picked up my laptop and took a taxi to the airport.

The taxi to the airport was smooth enough. I was already checked in and had no bags to drop off, so I went straight for the security and was greeted by the biggest queue I had ever seen at Birmingham airport. To all those people that laugh at me getting to the airport 2+ hours before a short flight like this I say, “Better to be safe than sorry!”

Despite the massive queue for security, populated by people who didn’t understand commands like, “Belts off!”, and, “All liquids out of your bags!”, the queue moved quite quickly and the departure area felt relatively quiet. I grabbed some food and logged into work to find one of the DW loads had failed. I cleaned stuff up and reset it. As I was boarding I passed one of my colleagues who was off to Glasgow for a product user group. I shouted across that his DW load had failed, then turned the corner to board before he could quiz me further. :)

The ChavAir flight was fine. They are a basic bitch airline, but you can’t really complain when you are paying £27 for a return flight. I overheard three people saying they paid £20 return. I was robbed. :)

When I arrived in Dublin, I got the AirLink Express into the city, which was 10 Euros for a return ticket and dropped me off about 100 yards from the Gresham Hotel. Bonus!

After signing in and saying hello to a couple of people, including the wife, it was off to the first session. My timetable for the day was:

  • Marcin Przepiorowski with “Looking for Performance Issue in Oracle SE. Check What OraSASH Can do for You”. I’m lucky enough to have Oracle EE with the Diagnostics and Tuning pack for all the databases I work with, so I get to use the real ASH and the performance pages in Cloud Control. Even so, it’s worth keeping your eye on what others are doing, as you never know when you will need it!
  • Carl Dudley with “SQL Tips, Techniques and Traps”. I really enjoyed this session. It was a quick pace with lots of little and interesting points. I’m sure everyone picked up something they had not heard before. I know I did.
  • Oren Nakdimon with “Write Less (Code) with More (Oracle 12c New Features)”. This was another quick paced session made up of lots of little pointers. As I watched it I found myself thinking, “Have I written about that?”, or, “Did I include that in my article?”. There were certainly a few things that had passed me by during my time with 12c, so I made a note about them and will be revisiting a couple of articles. It was a really neat session!
  • Keith Laker with “SQL Pattern Matching Deep Dive”. I’ve written some stuff on pattern matching, but this was another level. After watching this session I know enough to know I don’t know enough. :) Definitely a subject I need to go back and revisit. I’m always a little nervous of deep dive sessions because often they don’t deserve that title. I think this one did! :)
  • Me with “Analytic Functions: An Oracle Developer’s Best Friend”. This was in the same room as Keith’s talk and had most of the same audience. I started by saying something to the tune of, if you understood the stuff from the previous session, you probably don’t need to watch this one. :) My analytics session is quite different to ones I’ve seen others do. It is an entry level session, where I repeatedly reference non-analytics stuff to try and simplify the concepts and syntax. If you have done lots of analytics it’s probably not for you, but I always get some comments from people saying they use analytics, but didn’t realise what some of the stuff did.
  • Me with “Oracle Database Consolidation: It’s Not All About Oracle Database 12c!”. This is an overview session where I discuss the methods of database consolidation I use along with their pros and cons. I don’t dislike any individual method of database consolidation, but I do react harshly to anyone who claims one method is superior. There is no one-size-fits-all solution to database consolidation and anyone that tells you there is is a bloody liar! You will always need a combination of approaches and this is very much my message here. It’s a light and fluffy session, which probably fits quite well towards the end of the day when everyone is fried. :)
  • Cloud Q&A Panel Session. I mostly turned up to support the wife, but it was actually quite relevant to my current company, who are in the procurement phase of a replacement for many of our core business systems, with “the cloud” being an option. Added to that, I’ve been doing POCs of Azure, AWS and Oracle Cloud recently for IaaS and PaaS.

From there it was a quick chat with some folks at the social event, then the AirLink Express back to Dublin Airport.

The flight back was fine, but I was starting to feel really worse for wear. At one point I thought I was going to puke, but I managed not to. I was imagining everyone else thinking I had been for a day on the lash in Dublin. :) We landed early and I got a taxi home and the day was done!

Big thanks to OUG Ireland for inviting me to the day. Sorry I couldn’t stay for the second day! Thanks to the other speakers and attendees, who are collectively the most important people there! Thanks to the Oracle ACE Program for letting me continue to fly the flag!

For anyone that is looking for a new conference to try out, you should give OUG Ireland 2017 a go. Just so you know, here is the breakdown of the travel costs for my day trip:

  • Taxi to airport: £25
  • Return flight between Birmingham and Dublin: £27
  • Return trip on AirLink Express into the city: 10 Euros
  • Taxi home: £35
  • Total: < £100

The costs have been similar for the last three years and it’s certainly something I’m happy to pay out of my own pocket!

See you all next year!

Cheers

Tim…

OUG Ireland 2016 – Summary was first posted on March 8, 2016 at 3:05 pm.
