Feed aggregator

A Better Diff

Michael Dinh - Thu, 2016-03-03 22:25

I have been working on simplifying, perfecting, and comparing RMAN backup scripts.

The typical diff file1 file2 was not useful, as I wanted a complete picture.

Look at what I found!

$ diff -iwyB --suppress-common-lines -W 150 rman_bkupinc.sh rman_bkuparc.sh;echo
# RMAN database incremental backup                                        |     # RMAN archivelog backup
# rman_bkupinc.sh                                                         |     # rman_bkuparc.sh
# Shell script calls bkupinc.rman at at SCRIPT_DIR location               |     # Shell script calls bkuparc.rman at at SCRIPT_DIR location
SID=${1:?"---> USAGE: $DN/$BN -ORACLE_SID -LEVEL"}                        |     SID=${1:?"---> USAGE: $DN/$BN -ORACLE_SID"}
LVL=${2:?"---> USAGE: $DN/$BN -ORACLE_SID -LEVEL"}                        <
DAY=`date '+%bW%U'`                                                       |     DAY=`date '+%aH%H'`
RMAN_LOG=$LOG_DIR/`echo $BN|cut -d'.' -f1`.$ORACLE_SID.L$2.$DAY.log       |     RMAN_LOG=$LOG_DIR/`echo $BN|cut -d'.' -f1`.$ORACLE_SID.$DAY.log
rman @${RMAN_SCRIPT} ${LVL} msglog $RMAN_LOG                              |     rman @${RMAN_SCRIPT} msglog $RMAN_LOG
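
For anyone unfamiliar with the flags, here is what the first command is doing (standard GNU diff options; see man diff). The second run below drops -B and --suppress-common-lines to show the full side-by-side picture:

$ diff -iwyB --suppress-common-lines -W 150 file1 file2
# -i  ignore case differences
# -w  ignore all whitespace
# -y  side-by-side, two-column output
# -B  ignore blank lines
# -W 150  use an output width of 150 columns
# --suppress-common-lines  print only the lines that differ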

$ diff -iwy -W 150 rman_bkupinc.sh rman_bkuparc.sh;echo
#!/bin/sh -ex                                                                   #!/bin/sh -ex
# Michael Dinh: Mar 03, 2016                                                    # Michael Dinh: Mar 03, 2016
# RMAN database incremental backup                                        |     # RMAN archivelog backup
# rman_bkupinc.sh                                                         |     # rman_bkuparc.sh
# Shell script calls bkupinc.rman at at SCRIPT_DIR location               |     # Shell script calls bkuparc.rman at at SCRIPT_DIR location

DN=`dirname $0`                                                                 DN=`dirname $0`
BN=`basename $0`                                                                BN=`basename $0`
SID=${1:?"---> USAGE: $DN/$BN -ORACLE_SID -LEVEL"}                        |     SID=${1:?"---> USAGE: $DN/$BN -ORACLE_SID"}
LVL=${2:?"---> USAGE: $DN/$BN -ORACLE_SID -LEVEL"}                        <

set -a                                                                          set -a
### Edit for proper location                                                    ### Edit for proper location
SCRIPT_DIR=/media/sf_working/rman                                               SCRIPT_DIR=/media/sf_working/rman
LOG_DIR=/tmp                                                                    LOG_DIR=/tmp
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin               PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
LD_LIBRARY_PATH=/lib:/usr/lib                                                   LD_LIBRARY_PATH=/lib:/usr/lib

### Edit resync_catalog.rman                                                    ### Edit resync_catalog.rman
### Uncomment if catalog is being used                                          ### Uncomment if catalog is being used
# RESYNC_CATALOG=$SCRIPT_DIR/resync_catalog.rman                                # RESYNC_CATALOG=$SCRIPT_DIR/resync_catalog.rman

ORACLE_SID=$1                                                                   ORACLE_SID=$1
ORAENV_ASK=NO                                                                   ORAENV_ASK=NO
. oraenv                                                                        . oraenv
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"                                        NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
DAY=`date '+%bW%U'`                                                       |     DAY=`date '+%aH%H'`
RMAN_SCRIPT=$SCRIPT_DIR/`echo $BN|cut -d'.' -f1|cut -c6-`.rman                  RMAN_SCRIPT=$SCRIPT_DIR/`echo $BN|cut -d'.' -f1|cut -c6-`.rman
RMAN_LOG=$LOG_DIR/`echo $BN|cut -d'.' -f1`.$ORACLE_SID.L$2.$DAY.log       |     RMAN_LOG=$LOG_DIR/`echo $BN|cut -d'.' -f1`.$ORACLE_SID.$DAY.log
TMPLOG=/tmp/`echo $BN|cut -d'.' -f1`_$ORACLE_SID.log                            TMPLOG=/tmp/`echo $BN|cut -d'.' -f1`_$ORACLE_SID.log
ERRLOG=/tmp/`echo $BN|cut -d'.' -f1`_$ORACLE_SID.err                            ERRLOG=/tmp/`echo $BN|cut -d'.' -f1`_$ORACLE_SID.err
set +a                                                                          set +a

# Lock file                                                                     # Lock file
exec 200>/tmp/$BN.lck                                                           exec 200>/tmp/$BN.lck
flock -n 200 || exit 1                                                          flock -n 200 || exit 1

# RMAN archivelog backup                                                        # RMAN archivelog backup
rman @${RMAN_SCRIPT} ${LVL} msglog $RMAN_LOG                              |     rman @${RMAN_SCRIPT} msglog $RMAN_LOG

### Uncomment if catalog is being used                                          ### Uncomment if catalog is being used
# rman @${RESYNC_CATALOG} msglog $RMAN_LOG append                               # rman @${RESYNC_CATALOG} msglog $RMAN_LOG append

cp -v $RMAN_LOG $TMPLOG                                                         cp -v $RMAN_LOG $TMPLOG
egrep -i '^rman-|^ora-|error|fail' $RMAN_LOG > $ERRLOG                          egrep -i '^rman-|^ora-|error|fail' $RMAN_LOG > $ERRLOG
exit                                                                            exit


$ ll /tmp/rman*.*
-rw-r--r--. 1 oracle oinstall     0 Mar  3 17:19 /tmp/rman_bkuparc_hawklas.err
-rw-r--r--. 1 oracle oinstall  7910 Mar  3 17:19 /tmp/rman_bkuparc_hawklas.log
-rw-r--r--. 1 oracle oinstall  7910 Mar  3 17:19 /tmp/rman_bkuparc.hawklas.ThuH17.log
-rw-r--r--. 1 oracle oinstall     0 Mar  3 17:19 /tmp/rman_bkuparc.sh.lck
-rw-r--r--. 1 oracle oinstall     0 Mar  3 19:20 /tmp/rman_bkupinc_hawklas.err
-rw-r--r--. 1 oracle oinstall 15315 Mar  3 19:20 /tmp/rman_bkupinc.hawklas.L1.MarW09.log
-rw-r--r--. 1 oracle oinstall 15315 Mar  3 19:20 /tmp/rman_bkupinc_hawklas.log
-rw-r--r--. 1 oracle oinstall     0 Mar  3 19:20 /tmp/rman_bkupinc.sh.lck
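
The DAY format strings drive the log naming above, using standard date(1) codes: %b is the abbreviated month, %U the week of the year, %a the abbreviated weekday, and %H the hour. So the incremental backup log rotates weekly (MarW09) while the archivelog backup log rotates hourly through the week (ThuH17):

$ date '+%bW%U'
MarW09
$ date '+%aH%H'
ThuH17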

CMSWire Webinar Q&A: Best Practices for Digital Experience Adoption

WebCenter Team - Thu, 2016-03-03 16:04

Thank you to everyone who joined us last Thursday on our live webcast: Best Practices for Digital Experience Adoption; we appreciate your interest and the great Q&A that followed! For those who missed it or who would like to watch it again, we now have the on-demand replay available for the webcast here.


On the webcast, Lori Alcala, B2B Content Strategist and Writer and Kellsey Ruppel, Sr. Product Marketing Manager at Oracle had an interactive discussion that examined the results of a major collaboration, digital experience and engagement survey. Audience members were able to discover how their industry peers create and manage their content through multiple channels and drive their digital efforts for greater connections with customers, employees, partners and the general public. Kellsey and Lori talked about survey results from over 150 respondents, gave best practices for self-service and business automation and discussed social collaboration and content drivers.

We also wanted to capture some of the most asked questions and our experts' responses here. But please do follow up with us and comment here if you have additional questions or feedback. We always look forward to hearing from you.

Q: Is there a way to give LoB managers autonomy for small, local projects without complicating and jeopardizing the information assets of the enterprise? 

Kellsey: Yes there is. And this is something we get asked quite frequently by our customers as they look to resolve outstanding technology needs without IT involvement. From marketing and sales to HR and finance, LoB managers are looking for nimble technology solutions that they can deploy quickly and easily, while maintaining security and governance. As we discussed today, a comprehensive cloud collaboration solution will allow you to securely and effectively collaborate with employees and your external ecosystem; share and access work documents that are tied to business processes, and even your applications, at any time, from anywhere, and on any device; and drive business process automation and decision making on the go, powering collaborative, more relevant communications.

Q: In theory, it seems that giving marketing or other departments more control over their websites will help them see various benefits. But are businesses really ready to make that shift?

Lori: This is actually a question we asked in the survey – If your organization could create plug-and-play websites, web apps or other web experiences without relying on IT, would colleagues in your business unit be willing to do so? We found that more than half (54%) said yes, they’d be willing as long as they had proper training. 18% were interested but said they didn’t have the capability in their department, and only 10% said they’re not interested. So, it looks like organizations are willing to move in this direction providing they have the parameters in place to do so.

Q: We definitely need to know more about  how to get more stakeholders into the system. So what is your advice for bringing this kind of digital engagement strategy to upper management? How do you get them more involved in the process of starting something like this? 

Kellsey: So I think this is a great question, and it's probably one that everybody is thinking: "yeah, this is really great technology!" and "this is all good in theory, but how do you actually get buy-in? And once you do get that buy-in and implement it, how does it all work?" I think we need to first take a step back. We talked earlier about the benefits of being able to bring together disparate systems, collaborate anywhere, simplify process automation, and communicate better. You can illustrate that as we did today, and even take these survey results to your upper management to show them that these are challenges that not only we are facing, but that hundreds of other people are seeing as well. Everybody already knows it's a challenge, but communicating those benefits and showing how other customers are achieving results -- saving time, saving money -- I think that's a really good place to start.

Lori: Yeah, so like with any initiative, you want to come to the table with some evidence. So as Kellsey said, maybe take some of the data that we have from the survey and show your upper management that having these kinds of systems in place can really help you collaborate and communicate better internally. Having some numbers is always very helpful in being able to sway upper management to get those decisions made.

Q: Do you have recommendations on strategy that fosters staff adoption of new collaborative systems? How do you get everyone on your teams to align themselves on this collaborative system? 

Lori: With any initiative again, when you're trying to get this kind of adoption, it's really important to speak to the employees and let them know what's in it for them. So if there's a new system, how is it going to help them? And how do the efforts that result from this contribute to the larger strategic objectives of the organization? We see that a lot of employees are very disconnected from their companies, and there are all kinds of studies that talk about the disengagement of today's employees. That's because they don't know how their efforts contribute to where their company is going. So I think it's really important that upper management steps up its communication efforts to let people know what the vision is and what the messaging is, so that everybody is on the same page. Employees then understand what they are contributing to the overall company, and how these kinds of tools can help them do those things much better.

Kellsey: And what I have found with the different technologies that are rolled out is to start small. Start with a small pilot project and see how that works -- how are the users resonating with it? What are the capabilities? What other enhancements can be made? You can really learn and grow from that. Finding a champion is another recommendation I have: find one individual or a team that really knows the technology and will get their hands dirty. They can be the poster child, train others, and really be the champion of that project.

Q. Are there concerns or issues with allowing LOB departments the power of creating their own websites?

Kellsey: Whether it’s a CFO calculating sales commissions or a marketing manager tracking ad campaigns, “shadow IT projects” are springing up throughout the enterprise. These departmental deployments have the advantage of speed, but without a comprehensive platform to structure these initiatives, the organization as a whole can quickly find itself with disconnected technology silos that complicate IT security, administration, and maintenance. Some business units invest in short-term solutions without considering the long-term ramifications of integration, scalability, performance, and reliability. Here is where the Oracle Cloud Platform and specifically our Documents, Process and Sites Cloud Services can help as they empower business users to create micro-communities that bring together content, people, and ideas. You can standardize on Oracle’s secure platform for content management, file sharing, website creation, and workforce collaboration. It streamlines access to content and business processes via web, desktop, mobile, and off-line options.  

Q: How can you best ensure brand consistency when empowering your marketing and communications department with these easy to build CMS's and portals?

Kellsey: So with that, I think that you need to make sure that you have processes in place up front so that you aren't running into those discrepancies. Because as you know, that can be very detrimental to your brand if there isn't that consistency across different channels that we talked about today. So putting those processes in place, having workflow, and then being able to automate those processes so that it's not something that is a headache or nightmare for these departments.

Lori: And I can add to that. The processes are very important, and part of that is making sure that you are monitoring them, not just putting a process in place and letting it go. Automation will really help, but making sure that people are adhering to those processes, and to any documentation that you provided, will really help keep all of the branding in line.

Q: How do you adjust your strategy when you have a company using mobile tools to collaborate? Do you need to adjust for mobile use? 

Lori: Yeah, I have just read a couple of studies; there was a B2B content preferences study that talks about how people are now asking for, and putting a very high priority on, content that is mobile enabled. So it's really important, when your company is developing content, that you think about publishing it in short, easy-to-digest bites. And then there are all kinds of technical things that maybe Kellsey can speak to, but I remember reading an article a little while back about how employees bring their own devices to the workplace, and there's this concern about security -- that the issue is not really the device itself but the software that people are bringing in. So Kellsey, maybe you can talk a little bit about that concern.

Kellsey: Yes, absolutely.  And I think that is interesting that you bring up bring your own device because I was listening to your response and that was the first thing that popped in my head as well. It’s very important that companies do realize that they need to have a mobile component when they’re delivering these collaborative tools because we all have a mobile device, that’s just now the nature of the world we live in. So definitely being able to scale on a mobile device is critical and very important. And so, being able to adjust for mobile use is all about trial and error. Being able to create that software, try it on the mobile device, make sure it renders properly, and just recognizing the fact that you need to have mobile is critical.

If you were unable to join us for the webcast, you can view the on demand version here

March 9: Restoration Hardware―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Thu, 2016-03-03 10:50
Join us for an Oracle HCM Cloud Customer Forum call with John Martinez, Senior Director, Rewards & People Operations, and Ryan Drew, Senior Leader, People Operations & Analytics, from Restoration Hardware. They will talk about how Restoration Hardware needed a modern HR system that could provide its executives with a strategic view of its people management, a critical component of the retail strategy supporting the company's fast growth.

Register now to attend the live Forum on Wednesday, March 9, 2016, at 9:00 a.m. PT and learn more about Restoration Hardware’s experience with Oracle HCM Cloud.

2016 Annual Pythian MySQL Community Dinner

Pythian Group - Thu, 2016-03-03 10:26

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Tuesday April 19, 2016 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $40 USD, which includes a Mexican buffet, non-alcoholic drinks, taxes, and gratuities (see menu)

How: Purchase your ticket below or RSVP through Eventbrite

Pythian Attendees:

Derek Downey
Alkin Tezuysal
Okan Buyukyilmaz
Emanuel Calvo
John Schulz
Martin Arrieta
Gabriel Cicilliani
Christos Soulios
Theresa Nova

Categories: DBA Blogs

Connecting to a pluggable database -12c

Learn oracle 12c database management - Thu, 2016-03-03 10:05
******* ********** *********
USING EZCONNECT:
******* ********** *********

[oracle@Linux03 ~]$ sqlplus pdborcl1/xxxxxx@localhost:1521/pdborcl1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 3 09:12:17 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDBORCL1
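
Note: the EZCONNECT format is simply user/password@host:port/service_name; the port defaults to 1521 if omitted, and a PDB's default service name matches the PDB name.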

******** ********* *********
USING TNSNAMES:
******** ********* *********

Create a tnsnames entry for the pdb that you want to connect

[oracle@Linux03 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

pdborcl1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = Linux03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdborcl1)
    )
  )

[oracle@Linux03 ~]$ sqlplus pdborcl1/xxxxx@pdborcl1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 3 09:15:10 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDBORCL1
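
A third option is worth noting here: if you are already connected to the root container as a common user with the SET CONTAINER privilege, you can switch into the PDB without opening a new connection:

SQL> ALTER SESSION SET CONTAINER = pdborcl1;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDBORCL1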

Categories: DBA Blogs

MySQL on FreeBSD: old genes

Pythian Group - Thu, 2016-03-03 10:02

Maintaining mission critical databases on our pitchfork wielding brother, the “Daemon” of FreeBSD, seems quite daunting, or even absurd, from the perspective of a die-hard Linux expert, or from someone who has not touched it in a long time. The question we ask when we see FreeBSD these days is “why?”.  Most of my own experience with FreeBSD was obtained 10-15 years ago.  Back then, in the view of the team I was working on, a custom compiled-from-source operating system like FreeBSD 5.x or 6.x was superior to a Linux binary release.

Package managers like YUM and APT were not as good.  They did not always perform MD5 checks and use SSL like today’s versions. RedHat wasn’t releasing security updates 5 minutes after a vulnerability was discovered. Ubuntu didn’t exist. Debian stable would get so very old before receiving a new version upgrade. FreeBSD was a great choice for a maintainable, secure, free open source UNIX-like OS with tight source control and frequent updates.

Most people do not understand why FreeBSD remains a great choice for security and stability. The main reason is that the entire source of the base OS and the kernel (not just the kernel) is tightly maintained and tested as a whole, monolithic distribution.

FreeBSD 10.2 is different than versions I worked on many years ago, in a good way, at least from the standpoint of getting started. First, “pkg” has gotten quite an overhaul, making installing packages on FreeBSD as easy as with YUM or APT.  portsnap and portmaster make port upgrades much easier than they used to be. freebsd-update can take care of wholesale updates of the operating system from trusted binary sources without having to “build the world”. These are welcome changes; ones that make it easier to get to production with FreeBSD, and certainly made the task of rapidly building and updating a couple of “lab” virtual machines easier.

In my effort to get re-acquainted with FreeBSD, I hit some snags. However, once I was finished with this exercise, FreeBSD had re-established itself in my mind as a decent flavor to host a mission critical database on. Open Source enthusiasts should consider embracing it without (much) hesitation. Is there some unfamiliar territory for those who only use MySQL on MacOS and Linux? Sure. But it is important to remember that BSD is one of the oldest UNIX like operating systems. The OSS world owes much heritage to it. It is quite stable and boring, perhaps even comfortable in its own way.

Problem 1: forcing older versions of MySQL

I needed to install MySQL 5.5 first, in order to test a mysql upgrade on FreeBSD.  However, when installing percona-toolkit either via “pkg install” (binary) or /usr/ports (source), the later 5.6 version of the mysql client would inevitably be installed as a dependency. After that point, anything relating to MySQL 5.5 would conflict with the 5.6 client. If I installed in the opposite order, server first, percona-toolkit second, the percona-toolkit installation would ask me if it is OK to go ahead and upgrade both server and client to 5.6.

TIP: don’t forget make.conf

my /etc/make.conf:
MYSQL_DEFAULT?= 5.5

Once I added MYSQL_DEFAULT into make.conf, the installations for MySQL 5.5 became seamless. Note: if you want another flavor of MySQL server such as Percona Server, install the server “pkg install percona55-server” prior to “pkg install percona-toolkit” so that the client dependencies are met prior to installation.

Problem 2: Some tools don’t work

pt-diskstats does not work, because it reads from /proc/diskstats, which does not exist on FreeBSD. Other favorites like htop don’t work right out of the box. So far I have had good luck with the rest of the Percona toolkit besides pt-diskstats, but here’s how you get around the htop issue (and perhaps others).

TIP: Get the linux /proc mounted

dynamic commands:
# kldload linux
# mkdir -p /compat/linux/proc
# mount -t linprocfs linproc /compat/linux/proc

to make permanent:
# vi /boot/loader.conf (and add the following line)
linux_load="YES"
# vi /etc/fstab (and add the following line)
linproc /compat/linux/proc linprocfs rw 0 0

As you may have determined, these commands ensure that the Linux compatibility kernel module is loaded into the kernel, and that the Linux-style /proc is mounted in a location you might not be used to: "/compat/linux/proc". The FreeBSD /proc may also be mounted.

Problem 3: I want bash

# pkg install bash
… and once that’s done
# pw user mod root -s /usr/local/bin/bash
…and repeat ^^ for each user you would like to switch. It even comes with a prompt that looks like CentOS/RHEL.
[root@js-bsd1 ~]#

Problem 4: I can’t find stuff

BSD init is much simpler than SysV and upstart init frameworks so your typical places to look for start files are /etc/rc.d and /usr/local/etc/rc.d. To make things start on boot, it’s inevitably a line in /etc/rc.conf.

In our case, for MySQL, our start file is /usr/local/etc/rc.d/mysql-server. To have MySQL start on boot, your rc.conf line is:
mysql_enable="YES".

If you do not wish to make MySQL start on boot, you may simply say "/usr/local/etc/rc.d/mysql-server onestart"

Notes on binary replacement

Please note, just like in the Linux world, MariaDB and Percona Server are drop-in replacements for MySQL, so the start files and enable syntax do not change. Your default location for my.cnf is /etc/my.cnf, just like in the rest of the known universe.

This command lists all installed packages:

pkg info -a

Use pkg remove and pkg install to swap in new versions of your favorite MySQL software.

I ran into no greater issues with pkg than I would with yum or apt doing binary removals and installations, and no issues at all with mysql_upgrade. Remember: if you had to alter make.conf like I did earlier, update it to reflect the versions you want to install.
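
As a sketch of such a swap (package names assumed from the ports naming convention; verify them with pkg search mysql on your release), a 5.5-to-5.6 binary replacement looks roughly like this:

# pkg remove mysql55-server mysql55-client
# pkg install mysql56-server mysql56-client
# service mysql-server onestart
# mysql_upgrade -u root -p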

For those who like ZFS, the FreeBSD handbook has a very detailed chapter on the topic. I for one like plain old UFS. It might be the oldest filesystem that supports snapshots, and it can be implemented very simply for those who like low overhead.

Happy tinkering with FreeBSD and MySQL, and thanks for reading!

Categories: DBA Blogs

Sydney Gets New AWS Availability Zone

Pythian Group - Thu, 2016-03-03 09:47

On a scorching November day in 2012, Sydneysiders were bracing themselves for yet another heat wave when, all of a sudden, they were pleasantly surprised as an elastic cloud occupied the tech skies. On November 12, 2012, Amazon announced the new Asia Pacific (Sydney) Region in Australia.

Before that, Australian customers had to reach out to Japan or Singapore for their cloud needs. That was not really feasible, as it increased up-front expenses, long-term commitments, and scaling challenges. Amazon recognized that and Sydney became another region in the world.

They have now taken it a step further and launched a new Availability Zone (AZ) in Sydney. An Availability Zone is an isolated location within a region's data centres, from which public cloud services originate and operate.

The new Availability Zone is ap-southeast-2c. It is set to provide enhanced performance and scalability to Australian customers, enabling them to fully leverage the potential of technologies like Lambda, the Elastic File System shared filesystem, and Amazon RDS for MS SQL Server.

Pythian’s established presence in Australia and New Zealand, coupled with round-the-clock, world-class support for AWS, SQL Server, and other cloud technologies, enables it to support Australian and New Zealand customers from the word go.

Categories: DBA Blogs

OBIEE 12c – Your Answers After Upgrading

Rittman Mead Consulting - Thu, 2016-03-03 04:00

Several blogs have already been written about new functionality in OBIEE 12c. Mark Rittman, for example, posted a good one here.

Now, I’ve personally had the chance to play with it for a few weeks, mostly in Answers and some with the RPD, and wanted to share my experience. With a sleek interface and many new functionalities, 12c brings some very useful features that users will appreciate. As with most new software releases, I expected to find issues that needed to be worked out. In general, I was pleasantly surprised with the UI, the speed, and the intuitiveness that came along with OBIEE 12c.

Here, I’ll share with you some of the new features within Answers:

Percent Calculation

If you’ve created lots of percent variance columns, it’s probably second nature that you will create your formula and then multiply by 100. In 12c, you can create your percent calculation without multiplying it by 100, then set your % data formatting in the Column Properties. In the same spot where you specify how the data is displayed, you can check the x100 box, which in turn will automatically multiply your results from that column by 100. Pretty sleek solution to simplify your formulas.
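
For instance (column names hypothetical), a percent variance column can now stay a plain ratio, such as ("Base Facts"."Actual" - "Base Facts"."Budget") / "Base Facts"."Budget", with the x100 box taking care of the scaling.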

Percent Calc

Saved Columns

This feature is very well described here, so I will give a high level overview: 12c gives you a very easy way to save a complex formula into the catalog. If you’ve built a lot of logic in a column’s formula, and would like to reuse the logic in future reports, you will appreciate the opportunity of saving columns. I remember creating many financial calculations that had to be reused often, and until now there was no easy way to retrieve the column formulas. Trying to simplify my life, I ended up inventing “my own method” of saving complex calculations by saving different analyses that I named as “Master – Calculation” containing the columns that I reused often. I would start many reports based on these Master reports because they had my pre-built formulas; however, this was not a clean method for others to follow. OBIEE 12c gives you this clean and simple method for storing and reusing your most wanted columns. You do this by entering your formula in edit formula and choosing to “Save Column as” for future use.

Calculated Columns

OBIEE 12c provides a more intuitive way to create calculated columns than previous versions. In 10g or 11g, you needed to add a “whatever” column to the query, and then go in Edit Formula to define the calculation for your new column. While this worked, most new users often wondered why they were “bringing in two revenue columns,” for example. In 12c, you can add only the needed columns to your Criteria, then go straight to Results. In the Results tab, there is a New Calculated Measure icon that brings you immediately to the Edit Formula screen where you can name your new measure and define its formula.

Calc Columns

Measure Abbreviation

There is also a more intuitive abbreviation of the measures that are placed on a graph. In 11g, when you dragged an amount to an axis, you may recall that the numbers would show up exactly as the raw number. So, if your result was 12,000,000, then that was exactly what you would see on the graph to begin with. If you wanted to improve your graph, you needed to go to the Graph Properties and format the axis data to be abbreviated into, for our example above, millions (or 12M). To save you a step, 12c automatically abbreviates your graph data in the most user-friendly way. So, if the data is 12,000,000, you automatically get 12M!

Measure Abbreviation.png

Heat Matrix

An easy-to-use heat matrix!—I mean it: easy. In 11g, you had to be somewhat visually savvy and spend a lot of time on conditional formatting. OBIEE 12c gives you a tool that allows you to create a meaningful heat matrix in a matter of minutes—wait—even seconds. All you need to know is the two dimensions and one measure that you would like to use, and then drag and drop them. Choose from an array of color schemas and how you would like to use the colors. In no time, your heat matrix is ready.

Heat Map

Treemaps

A new member of the OBIEE family is here to provide a visual solution for very complex activities. The Treemap provides a hierarchical structure that allows you to quickly spot patterns and outliers. At first, it may require a bit of head twisting to look at a graph like this, but remember, this is indeed a graph for complex activities. One of the most ideal uses for this new feature is grouping by parent/child groups and displaying how two measures fare inside each group.

Treemap.png

Advanced Analytics

OBIEE 12c gives you the capability of working with statistical and R functions right from the ‘Edit formula’ pane. While I found that this new feature is still not very user friendly, it’s a lot easier than making this functionality work in 11g. For example, to create a simple Trendline in 11g, the developer had to slowly build each step of a calculation to find the slope of a line, and then find the Y intercept. With these answers in hand, the results had to be carefully placed on a graph so that it could render meaningful results. If you require statistical graphs within OBIEE, 12c may be a great fit for you. For example, below is a graph showing four different Trendlines:

Trendline

The Criteria for building these four lines would be very intense in 11g; but in OBIEE 12c, it contained only five columns: one for the Calendar Year, and one for each Trendline. The Trendlines were created one at a time, by inserting the new “Analytics” Function in the column’s formula (see below).

AA Combo
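
As an illustration of what one of those Trendline columns looks like (the subject area and column names here are hypothetical, and the exact argument list is worth confirming in the Edit Formula function picker), a linear Trendline formula takes roughly this shape:

TRENDLINE("Base Facts"."Revenue", ("Time"."Calendar Year") BY (), 'LINEAR', 'VALUE')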

Data Mashup

This is a dream come true for many of us, though it requires an optional data visualization license. With this new functionality, you are able to use OBIEE along with any Excel spreadsheet (XSA) saved on your machine.

You can add a spreadsheet to OBIEE from two areas:

  1. When you are creating an analysis (in the Criteria tab, and then choosing to add data source as shown below), or

Add Data Source

2. By going to the Visual Analyzer Home Page.

As this blog focuses on Answers, I will review the first option here.

There are three possible ways of analyzing a spreadsheet in Answers. You either want to:

  1. Analyze the spreadsheet by itself, or
  2. Use attributes from the spreadsheet along with fact data from your enterprise system, or
  3. Use fact data from your spreadsheet along with attributes and facts from your enterprise system.

For options 2 and 3 to work properly, it is important that your joins are properly matched (watch your cardinalities!) from your spreadsheet to your enterprise data. Also, as usual, option 3 will only work along with another fact table when the two tables are joined to a conformed dimension. Cardinalities and conformed dimensions are items that we generally take for granted when working on front-end OBIEE, because these points have been carefully handled during RPD modeling. Since the spreadsheet modeling has to be done in the front end, special caution must be used when modeling them in order to avoid “exploded” results, or simply inaccurate results.

Word of caution on placing an XSA sourced analysis in a shared folder:

Once you create an analysis using a spreadsheet and save it to a shared folder, you will receive this message:

Warning

Once you choose “YES,” the spreadsheet will show as a new subject area—for you and for anyone who has access to the folder in which you placed the analysis, meaning that the catalog security just TOOK CONTROL of your spreadsheet! Below is a screenshot of how they show as new subject areas:

Subject Areas

So, if your intent was to share an analysis from a XSA, but not necessarily share the entire spreadsheet to be reused, you may want to restrict your analysis to a folder with the specific securities that you would like to apply to your spreadsheet. BUT…think carefully before saving the analysis in a shared folder. If you realize that you made a mistake, just know that deleting your analysis from the incorrect folder will NOT remove your spreadsheet as an available subject area for other users. Remember, the catalog security took control of your spreadsheet, and it’s not going to let it go! If you saved the analysis in a folder with incorrect permissions, you must delete the spreadsheet altogether from the tool, reload it, and then save the analysis in the correct folder (with the permissions that you want).

You will likely need in-depth information regarding mashup security once you are really working with it. Check out this Oracle doc for more info.

Word of caution when archiving an analysis containing a spreadsheet, or when moving that analysis between environments:

The username of the owner of the analysis gets embedded in the column formula, and so does the precise name that you gave your spreadsheet when you first loaded it. So, let’s say that you are transitioning environments and the new environment does not contain your spreadsheets. If someone else has an archived catalog containing one of your mashup queries, they will get an error when retrieving results for your query, because the tool doesn’t have your spreadsheet loaded yet. The only way for them to unarchive your analysis and retrieve results is for YOUR USER to log into OBIEE, load the original spreadsheet (saving it with the same exact name as before), and then saving the analysis in the proper shared folder once again.

Deleting the New Subject Area

One tricky thing in this new tool: even if you uploaded your spreadsheet (XSA) during an analysis in OBIEE, it can only be deleted from the “New Home Page,” which is the Home Page of Visual Analyzer. You can get to the “New Home Page” from the “Old Home Page”:

New Home Page

Once in the New Home Page, click on Data Sources. Choose your Data Source and delete it!

Delete SA

I confess that I had some trouble finding the delete button. Maybe I would have bumped into it had I played more with VA, but that was not the case. Regardless, I felt relieved that this button existed somewhere!

Data Mashup Performance

This was a bit of an issue, but mostly when combined with the Advanced Analytics functions. From my research and from talking to colleagues, I found that the following must be observed to optimize performance:

  1. Reduce the size of your spreadsheet, when possible.
  2. Database indexing on the fields that you are joining.
  3. Proper cardinality on your mashup joins with your DB data.
  4. Set up caching for mashups on the BI Server.

Overall, the experience in OBIEE 12c Answers was very positive, and the new features could bring a great deal of time savings for any organization!

To learn more about all that OBIEE 12c has to offer, check out our upcoming bootcamps here.

Hope to see you then!

The post OBIEE 12c – Your Answers After Upgrading appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Alliance 2016

Jim Marion - Thu, 2016-03-03 02:54

HEUG Alliance is next week. I hope you are registered. I know this will be a fun and informative conference (as always). I am scheduled for two sessions at Alliance:

  • PeopleSoft Developer: Tips and Techniques on Monday at 3:30 PM in room 6B/C and
  • PeopleSoft Meet the Experts on Tuesday at 10:15 AM in room 307/308 table 2.

When I'm not in sessions, you will find me in the demo grounds. Stop by and say "Hello!"

APEX 5 - Opening and Closing Modal Window - Part Two

Denes Kubicek - Thu, 2016-03-03 02:34
In this blog post from 2015 I explained how to open a modal window from an item, pass the values back, display a success message, etc. That page also contains an item plugin that makes it easier to do all of this and avoid hardcoding. I have never published this plugin; hopefully this functionality will be a standard part of 5.1. Now, I was asked to make this work with a file browse item as well, so I extended that example and created a new one showing how to do it when changing and clicking on a file browse item. You can see this example here. If you have an account for my workspace, you can even download the plugin and make it work in your environment. Enjoy.

Categories: Development

Links for 2016-03-02 [del.icio.us]

Categories: DBA Blogs

Using SQL*Plus Instant Client 11.2 on OS X El Capitan

Christopher Jones - Wed, 2016-03-02 23:43

The installation steps for using SQL*Plus 11.2 (Instant Client) and other OCI-based applications have necessarily changed since OS X El Capitan broke DYLD_LIBRARY_PATH. The updated instructions are given on the Instant Client Downloads for Mac OS X (Intel x86) page.

Update: Instant Client 12.1 is out and installation is easier. Check out the above link for instructions.

Here are the 11.2 instructions repeated. The steps also work on earlier versions of OS X:

  1. Download the desired Instant Client ZIP files from OTN. All installations require the Basic or Basic Lite package.

  2. Open a terminal and unzip the packages into a single directory such as "~/instantclient_11_2". For example, to use SQL*Plus:

    cd ~
    unzip instantclient-basic-macos.x64-11.2.0.4.0.zip
    unzip instantclient-sqlplus-macos.x64-11.2.0.4.0.zip
      
  3. Create the appropriate libclntsh.dylib link for the version of Instant Client. For example:

    cd ~/instantclient_11_2
    ln -s libclntsh.dylib.11.1 libclntsh.dylib
    

    Note: OCCI programs will additionally need:

    ln -s libocci.dylib.11.1 libocci.dylib
    
  4. Add links to "~/lib" for required Basic package libraries. For example, to use OCI programs (including SQL*Plus, Python's cx_Oracle, PHP's OCI8, Node.js's node-oracledb, and Ruby's ruby-oci8 driver):

    mkdir ~/lib
    ln -s ~/instantclient_11_2/libclntsh.dylib.11.1 ~/lib/
    ln -s ~/instantclient_11_2/{libnnz11.dylib,libociei.dylib} ~/lib/
    
  5. To run SQL*Plus, add its libraries to "~/lib", and update PATH. For example:

    ln -s ~/instantclient_11_2/{libsqlplus.dylib,libsqlplusic.dylib} ~/lib/
    export PATH=~/instantclient_11_2:$PATH
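
With the links and PATH in place, a quick sanity check from the same terminal (the connect string below is only a placeholder):

    sqlplus -V
    sqlplus username/password@//dbhost.example.com:1521/orclservice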
        

Oracle Database 12c : Multitenant Architecture : Container or Pluggable Database CDB/PDB

Online Apps DBA - Wed, 2016-03-02 20:41

  With the launch of Database 12c in 2013, Oracle introduced a new architectural concept called multitenancy, where you have a Container Database (CDB) and Pluggable Databases (PDBs). To explain, I included a video from Tom Kyte & Randy Urbano in which Tom discusses Pluggable Databases and the challenges of database consolidation, and Randy explains the architecture of the PDB/CDB and [...]

The post Oracle Database 12c : Multitenant Architecture : Container or Pluggable Database CDB/PDB appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

SQL Injection with MySQL SLEEP()

Pythian Group - Wed, 2016-03-02 11:40

Recently we received an alert from one of our clients that running threads were high on one of their servers. Once we logged in, we noticed that all the selects were waiting for a table-level read lock. We scrolled through the process list and found the selects that were causing the problems. After killing them, everything went back to normal.
At first we couldn’t understand why the query took so long, as it looked like all the others. Then we noticed that one of the WHERE clauses was strange: a SLEEP(3) had been attached to the query with OR. Obviously, this server was the victim of a SQL injection attack.

What is SQL injection?

I think most of us know what SQL injection is, but as a refresher: SQL injection is when someone provides malicious input that ends up inside your WHERE clause, allowing them to run their own statements as well.
Typically this occurs when you ask a user for input, like a username, but instead of a real name they give you a MySQL statement that will be run by your server without you knowing it.
(Image: xkcd's "Exploits of a Mom" -- the Bobby Tables comic)
Based on the picture, let’s see a few examples.
We have a simple table:

mysql> describe post;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| test  | varchar(127)     | YES  |     | NULL    |                |
+-------+------------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

mysql> select * from post;
+----+--------+
| id | test   |
+----+--------+
|  1 | text1  |
|  2 | text2  |
|  3 | text3  |
|  4 | text4  |
|  5 | text5  |
|  6 | text6  |
|  7 | text7  |
|  8 | text8  |
|  9 | text9  |
| 10 | text10 |
+----+--------+
10 rows in set (0.00 sec)

Let's run a select with LIKE, which we know for sure won't have a match:

mysql> select * from post where test like '%nomatch%';
Empty set (0.00 sec)

But what happens if we don't filter the inputs and someone wants to get all the data?
mysql> select * from post where test like '%nomatch ' || '1==1' && '1%';
+----+--------+
| id | test   |
+----+--------+
|  1 | text1  |
|  2 | text2  |
|  3 | text3  |
|  4 | text4  |
|  5 | text5  |
|  6 | text6  |
|  7 | text7  |
|  8 | text8  |
|  9 | text9  |
| 10 | text10 |
+----+--------+
10 rows in set, 2 warnings (0.00 sec)

That was a very mild injection, but it could be much more malicious: we could drop another table!

mysql> show tables;
+----------------------+
| Tables_in_injecttest |
+----------------------+
| game                 |
| post                 |
+----------------------+
2 rows in set (0.01 sec)

mysql> select * from post where test like '%nomatch'; drop table game;-- %';
Empty set (0.00 sec)

Query OK, 0 rows affected (0.28 sec)

mysql> show tables;
+-----------------------+
| Tables_in_inject_test |
+-----------------------+
| post                  |
+-----------------------+
1 row in set (0.00 sec)

mysql>

If we don’t know the name of the table, we can still cause trouble by blocking access to the database.
If we insert SLEEP() in the WHERE part, it will be executed for every matching row; if we inject it as “OR SLEEP(n)”, it will be executed for every row in the table!
Okay, this will be “just” a long-running select. It shouldn’t cause much trouble thanks to InnoDB and transaction isolation, unless something needs a table lock.

Some common examples of what causes table locks are:

  • explicit lock table
  • insert/update/delete on MyISAM
  • ALTER table on InnoDB

Once statements start waiting for a lock on the table, all subsequent selects will wait for the previous locking statement to finish.

Terminal 1:
mysql> select * from post where test like '%nomatch ' OR sleep(300) AND '1%';
….
Terminal 2:
mysql> alter table post engine=innodb;
...
Terminal 3:
mysql> select SQL_NO_CACHE count(*) from post;
…
Processlist:
+----------+----------------------+-----------+--------------------+---------+-------+---------------------------------+-----------------------------------------------------------------------+
| Id       | User                 | Host      | db                 | Command | Time  | State                           | Info                                                                  |
+----------+----------------------+-----------+--------------------+---------+-------+---------------------------------+-----------------------------------------------------------------------+
| 17170817 | root                 | localhost | janitest           | Query   |    19 | User sleep                      | select * from post where test like '%nomatch ' OR sleep(300) AND '1%' |
| 17170918 | root                 | localhost | janitest           | Query   |    11 | Waiting for table metadata lock | alter table post engine=innodb                                        |
| 17170957 | root                 | localhost | janitest           | Query   |     4 | Waiting for table metadata lock | select * from post                                                    |
+----------+----------------------+-----------+--------------------+---------+-------+---------------------------------+-----------------------------------------------------------------------+
3 rows in set (0.00 sec)

As we see in the example, the ALTER TABLE will wait until it can get a lock on the post table, and from that point on it blocks every other select against the table.
Or, if you are using a MyISAM table, a simple update/insert will block access to the table, because it needs a table-level lock while it runs.

How can we defend ourselves from SQL injection?

There are several ways to secure yourself from SQL injection.

  • First of all, validate the input. If you expect only letters and numbers, filter it with a regexp, for example, to make sure there are no special characters there. Also escape the inputs on the application side; programming languages have built-in functions to do that (e.g., mysql_real_escape_string() in PHP).
  • Use prepared statements! They won't allow a second clause if you specified only one. When you use prepared statements, the variables are transmitted as MySQL variables. Even if the string is not escaped, it ends up in a single variable, and MySQL treats it as one longer string.
    (For more details see: http://dev.mysql.com/doc/refman/5.7/en/sql-syntax-prepared-statements.html, and the short demonstration after this list.)
  • Use a tool like MySQL Enterprise Firewall, which is a plugin for MySQL and can filter your statements to make sure there are no things like: || 1==1
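
To demonstrate the prepared-statement point against the schema above (shown with server-side PREPARE/EXECUTE for brevity; a driver's placeholder API behaves the same way), the entire malicious string simply becomes one LIKE pattern and matches nothing:

mysql> PREPARE stmt FROM 'SELECT * FROM post WHERE test LIKE ?';
Query OK, 0 rows affected (0.00 sec)

mysql> SET @input = '%nomatch ''; drop table game;-- %';
Query OK, 0 rows affected (0.00 sec)

mysql> EXECUTE stmt USING @input;
Empty set (0.00 sec)

mysql> DEALLOCATE PREPARE stmt;
Query OK, 0 rows affected (0.00 sec)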

I would like to start a little discussion about this, so if you have encountered SQL injection before, would you share with us how it was done? And in general, how do you prevent SQL injection in your applications?


Categories: DBA Blogs

prvf-0002 : could not retrieve local node name

Learn DB Concepts with me... - Wed, 2016-03-02 09:43

PRVF-0002 : could not retrieve local node name
Check whether the hostname is correct in the /etc/sysconfig/network and /etc/hosts files:
[oracle@Linux03 ~]$ cat /etc/sysconfig/network | grep HOSTNAME
HOSTNAME=Linux03

[oracle@Linux03 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[oracle@Linux03 ~]$ vi /etc/hosts

[oracle@Linux03 ~]$ su root
Password:
[root@Linux03 oracle]# vi /etc/hosts

[root@Linux03 oracle]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.215.xxx.xx Linux03   <<<--- added hostname and ip


An alternate solution is to set ORACLE_HOSTNAME in the .bash_profile file in the oracle user's home directory:
[oracle@Linux03 ~]$ cat .bash_profile | grep HOSTNAME
export ORACLE_HOSTNAME=Linux03
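
After either fix, a quick sanity check (standard Linux commands) confirms that the name now resolves locally:

[oracle@Linux03 ~]$ hostname
Linux03
[oracle@Linux03 ~]$ ping -c 1 Linux03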
Categories: DBA Blogs

The Most Important Responsibility of a Remote DBA

Chris Foot - Wed, 2016-03-02 08:00

As the modern database continues to evolve and take on a more strategic role in business, the complexities associated with managing these environments grow as well. This changing landscape forces database administrators to continuously adapt and grow alongside the database engine in order to properly design, support, and secure an enterprise’s critical data stores. Their in-depth knowledge of infrastructures so crucial to operations makes DBAs an integral part of not only day-to-day functions, but also business decisions aimed at reducing operating costs, improving margins, and more.

Oracle Infrastructure Cloud Partner Briefings

Oracle Infrastructure Cloud Webcasts - Partner Briefings ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

OUG Ireland 2016

Tim Hall - Wed, 2016-03-02 02:24


Early tomorrow morning I will start my journey to Ireland for the OUG Ireland 2016 conference.

I’ve got back-to-back sessions in the afternoon on database consolidation and analytic functions. I was invited to a panel on the Friday also, but I had already booked my flights before that invite, so sadly I had to decline.

This year is going to be another day trip for me, with the flight costing a gigantic £27 on ChavAir. :)

See you there!

Cheers

Tim…


Free Webinar on Oracle Database Monitoring with Zabbix

Gerger Consulting - Wed, 2016-03-02 01:04
You are kindly invited to attend the free webinar hosted by Oracle ACE and Oracle Certified Master Ronald Rood: Oracle Monitoring with Zabbix. Register at this link.

Enterprise IT is moving to the Cloud. With tens, hundreds, even thousands of servers in the Cloud, monitoring the uptime, performance, and quality of the Cloud infrastructure becomes a challenge that traditional monitoring tools struggle to solve. Enter Zabbix. Zabbix is a low-footprint, low-impact, open source monitoring tool that provides various notification types and integrates easily with your ticketing system.
During the webinar, we'll cover the following topics:

  • Installation and configuration of Zabbix in the Cloud
  • Monitoring Oracle databases using Zabbix
  • How to use Zabbix templates to increase the quality and efficiency of your monitoring setup
  • How to setup Zabbix for large and remote networks
  • How to trigger events in Zabbix
  • Graphing with Zabbix
Categories: Development

Log Buffer #463: A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2016-03-01 15:27

As the winter in the Northern hemisphere is giving way to spring, slowly but surely, blog posts are blooming in the gardens of Oracle, SQL Server and MySQL. This Log Buffer plucks some of them for your reading pleasure.

Oracle:

Providing A Persistent Data Volume to EMC XtremIO Using ClusterHQ Flocker, Docker And Marathon

There is sliced bread in SQL.

Oracle Cloud – Your service is suspended due to exceeding resource quota !

EM12c Compliance ‘Required Data Available’ flag – Understanding and Troubleshooting

How can I see my invisible columns

SQL Server:

Auto Generate Your Database Documentation

A Lightweight, Self-adjusting, Baseline-less Data Monitor

Keeping POST and GET Separated

How often should I run DBCC CHECKDB?

Disabling SQL Server Optimizer Rules with QUERYRULEOFF

MySQL:

MySQL Contributions status

Planets9s: Building scalable database infrastructures with MariaDB & HAProxy

High availability with asynchronous replication… and transparent R/W split

mysql_real_connect is not thread safe

Now available in swanhart-tools: NATIVE asynchronous query execution for any MySQL client!

Categories: DBA Blogs
