Feed aggregator

Autonomous Quadcopters Playing Some Catch

Oracle AppsLab - Wed, 2014-09-17 16:04

Tony recently went to a talk by Salim Ismail (@salimismail), the Founding Executive Director of Singularity University. He may or may not post his thoughts on the talk, which sounds fascinating, but this video is worth sharing either way, and not just because we have quadcopter fever.

Yeah, that’s autonomous flight, so refer to the list of horrifying things that should not be allowed.

Filler or Curated Content?

Oracle AppsLab - Wed, 2014-09-17 15:30

I consider these types of posts to be filler, but I suppose you could look at it as curated content or something highbrow like that. Take your pick.

10 Horrifying Technologies That Should Never Be Allowed

I scanned this post first, thought it would be interesting and left it to read later. Then I read it, and now I’m terrified. Here’s the list; make sure to hit the link and read all about the sci-fi horrors that aren’t really sci-fi anymore.

  • Weaponized Nanotechnology
  • Conscious Machines
  • Artificial Superintelligence
  • Time Travel
  • Mind Reading Devices
  • Brain Hacking Devices
  • Autonomous Robots Designed to Kill Humans
  • Weaponized Pathogens
  • Virtual Prisons and Punishment
  • Hell Engineering

xkcd on watches

This is exactly how I feel about watches.

This is Phil Fish

I only know who Phil Fish is because I watched Indie Game: The Movie. This short documentary by Ian Danskin is quite good and is newsworthy this week thanks to Markus Persson’s reference to it in his post about why he’s leaving Mojang, the makers of Minecraft, after Microsoft completes its acquisition of the company (h/t Laurie for sharing).

I have often wondered why so many people hate Nickelback, and now I have a much better understanding of why, thanks to Ian. Embedded here for your viewing pleasure.

https://www.youtube.com/watch?v=PmTUW-owa2w

No Write Permission on ACFS Mount Point

Sabdar Syed - Wed, 2014-09-17 15:24


Last night, I managed to create the ACFS mount point after resolving the issue "ACFS-9459: ASVM/ACFS is not supported on this os version". But after creating the ACFS mount point, I was unable to create or touch any files under it.

When I tried to touch a file under the ACFS mount point as both the oracle OS user and root, it failed with the following error:

"touch: cannot touch `x': Permission denied"

Here are the steps I tried and the error I got:

The ACFS mount point "/oracle/prd" was created on a Linux 6.5 server using the Oracle ASMCA tool, and this "/oracle/prd" mount point has 775 permissions.

As Oracle User:

[oracle@Linux01 ~]# df -m|grep -i asm
/dev/asm/oracle_prd-77   35840    148     35693   1% /oracle/prd
[oracle@Linux01 ~]# cd /oracle/prd
[oracle@Linux01 prd]# pwd
/oracle/prd
[oracle@Linux01 prd]# ls -ld /oracle/prd
drwxrwxr-x. 4 oracle dba 4096 Sep 15 19:29 /oracle/prd
[oracle@Linux01 prd]# ls
lost+found
[oracle@Linux01 prd]# touch abc
touch: cannot touch `abc': Permission denied
[oracle@Linux01 prd]#

As Root user:

[root@Linux01 ~]# df -m|grep -i asm
/dev/asm/oracle_prd-77   35840    148     35693   1% /oracle/prd
[root@Linux01 ~]# cd /oracle/prd
[root@Linux01 prd]# pwd
/oracle/prd
[root@Linux01 prd]# ls -ld /oracle/prd
drwxrwxr-x. 4 oracle dba 4096 Sep 15 19:29 /oracle/prd
[root@Linux01 prd]# ls
lost+found
[root@Linux01 prd]# touch abc
touch: cannot touch `abc': Permission denied
[root@Linux01 prd]#

The problem was that SELinux was enabled on the Linux system.

To check whether SELinux is enabled or disabled on the system, cat the file "/etc/selinux/config".
Note: The following steps were done using the root login:

[root@Linux01]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Or use the sestatus command to check the status.

[root@geprdb850 prd]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          disabled
Policy version:                 28
Policy from config file:        targeted

Here is how to disable SELinux:

Method 1- Edit "/etc/selinux/config" and set the SELINUX variable to 'disabled'
Method 2- Use the setenforce command to disable on-the-fly

If you go with Method 1, the change is permanent but only takes effect after you reboot the machine.

If you go with Method 2, the change is NOT permanent but takes effect immediately.

Method 1: (Permanent Change)

Take a backup of the "/etc/selinux/config" file.

[root@Linux01]# cp /etc/selinux/config /etc/selinux/config.bkp

Then edit the "/etc/selinux/config" file and set the SELINUX variable to 'disabled':

[root@Linux01]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Then reboot the server!!

Method 2: (On-the-fly)

[root@Linux01]# getenforce
Enforcing

[root@Linux01]# setenforce
usage:  setenforce [ Enforcing | Permissive | 1 | 0 ]

[root@Linux01 prd]# setenforce 0

[root@Linux01 prd]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          disabled
Policy version:                 28
Policy from config file:        targeted

[root@Linux01]# getenforce
Disabled

After SELinux was disabled, creating files under the ACFS mount point succeeded.

Note: The above commands have to be run as the root user, and should be done under system admin supervision.
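
A minimal verification sketch, assuming the same mount point and test file name used above; run it as root (or the oracle user) once SELinux is permissive or disabled:

# should now report Permissive or Disabled
getenforce
# the touch that previously failed should now succeed
touch /oracle/prd/abc && ls -l /oracle/prd/abc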

Regards,
Sabdar Syed.

http://sabdarsyed.blogspot.com

Oracle EMEA Customer Support Services Excellence Award 2014

The Oracle Instructor - Wed, 2014-09-17 13:54

The corporation announced today that I got the Customer Services Excellence Award 2014 in the category ‘Customer Champion’ for the EMEA region. It is an honor to be listed there together with these excellent professionals that I proudly call colleagues.

CSS Excellence Award 2014


Categories: DBA Blogs

Business Transformation: Getting from Here to There

Chris Warticki - Wed, 2014-09-17 12:00

New Oracle OpenWorld Sessions

Nearly every industry is undergoing some type of transformation. Businesses strive to innovate, gain market share, and stay ahead of the competition.

Six new Oracle OpenWorld conference sessions--specifically designed for organizations embarking on major transformation initiatives such as Cloud, Big Data and Analytics, and Engineered Systems--will help answer how to get from here to there.

During Oracle OpenWorld, Thought Leaders from Oracle Consulting will share insights, best-in-class methodologies, and critical lessons learned from helping customers transform their business with new solutions built on Oracle technology.

Do you need to hear first-hand how customers are successfully moving to the Cloud (HCM, ERP and CX) in weeks with a proven, practical approach that creates value, drives down cost, and reduces risk? Are you trying to maximize what you’re getting from your current analytic solution? Would you like to enhance your customer experience as well as improve the productivity of employees with mobility? Is a private cloud initiative on your short list? If so, you won’t want to miss new conference sessions dedicated to these leading transformational themes.

Learn More.


So many choices! 149 Oracle ACE-led sessions at Oracle OpenWorld

OTN TechBlog - Wed, 2014-09-17 10:54

Have you been searching the Oracle OpenWorld content catalog trying to decide which sessions to add to your schedule? We suggest you spend some time looking over the Oracle ACE session Focus On Document that lists all 149 sessions that will be presented by an Oracle ACE, ACE Associate or ACE Director.

Participants of the Oracle ACE Program are recognized for frequently sharing their technical insight, knowledge, and real world experience with the Oracle Community. We hope this handy list saves you some time preparing your session schedule and gives you some more time to think about what kind of cool t-shirt you are going to design while you're hanging out in the OTN Lounge. ;-)

Using the ILOM for Troubleshooting on ODA

Pythian Group - Wed, 2014-09-17 09:25

I worked on root cause analysis for a strange node reboot on a client’s Oracle Database Appliance yesterday. The case was quite interesting in that none of the logs contained any information related to the cause of the reboot. I could only see the log entries for normal activities and then – BOOM! – the start-up sequence! It looked like someone had just power cycled the node. I also observed the heartbeat timeouts followed by the node eviction on the remaining node. There was still one place I hadn’t checked, and it revealed the cause of the issue.

One of the cool things about ODA is its service processor (SP), called Integrated Lights Out Manager (ILOM), which allows you to do many things that you’d normally do while physically located in the data center – power cycle the node, change the BIOS settings, choose boot devices, and … (the drum-roll) … see the console output from the server node! And it doesn’t only show the current console output, it keeps logging it too. Each ODA server has its own ILOM, so I found out the IP address for the ILOM of the node that failed and connected to it using SSH.

$ ssh pythian@oda01a-mgmt
Password:

Oracle(R) Integrated Lights Out Manager

Version 3.0.14.13.a r70764

Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.

->
-> ls

 /
    Targets:
        HOST
        STORAGE
        SYS
        SP

    Properties:

    Commands:
        cd
        show

ILOM can be browsed as if it were a directory structure. Here the “Targets” are different components of the system. When you “cd” into a target you see its sub-components, and so on. Each target can have properties, displayed as variable=value pairs under the “Properties” section. There is also a list of “Commands” that you can execute for the current target. The “ls” command shows the sub-targets, the properties and the commands for the current target. Here’s how I found the console outputs from the failed node:

-> cd HOST
/HOST

-> ls

 /HOST
    Targets:
        console
        diag

    Properties:
        boot_device = default
        generate_host_nmi = (Cannot show property)

    Commands:
        cd
        set
        show

-> cd console
/HOST/console

-> ls

 /HOST/console
    Targets:
        history

    Properties:
        line_count = 0
        pause_count = 0
        start_from = end

    Commands:
        cd
        show
        start
        stop

-> cd history
/HOST/console/history

-> ls

The last “ls” command started printing all the history of console output to my screen, and look what I found just before the startup sequence (I removed some lines to make this shorter and I also highlighted the most interesting lines):

divide error: 0000 [#1] SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:09.0/0000:1f:00.0/host7/port-7:1/expander-7:1/port-7:1:2/end_device-7:1:2/target7:0:15/7:0:15:0/timeout
CPU 3
Modules linked in: iptable_filter(U) ip_tables(U) x_tables(U) oracleacfs(P)(U) oracleadvm(P)(U) oracleoks(P)(U) mptctl(U) mptbase(U) autofs4(U) hidp(U) l2cap(U) bluetooth(U) rfkill(U) nfs(U) fscache(U) nfs_acl(U) auth_rpcgss(U) lockd(U) sunrpc(U) bonding(U) be2iscsi(U) ib_iser(U) rdma_cm(U) ib_cm(U) iw_cm(U) ib_sa(U) ib_mad(U) ib_core(U) ib_addr(U) iscsi_tcp(U) bnx2i(U) cnic(U) uio(U) dm_round_robin(U) ipv6(U) cxgb3i(U) libcxgbi(U) cxgb3(U) mdio(U) libiscsi_tcp(U) libiscsi(U) scsi_transport_iscsi(U) video(U
) output(U) sbs(U) sbshc(U) parport_pc(U) lp(U) parport(U) ipmi_si(U) ipmi_devintf(U) ipmi_msghandler(U) igb(U) ixgbe(U) joydev(U) ses(U) enclosure(U) e1000e(U) snd_seq_dummy(U) snd_seq_oss(U) snd_seq_midi_event(U) snd_seq(U) snd_seq_device(U) snd_pcm_oss(U) snd_mixer_oss(U) snd_pcm(U) snd_timer(U) snd(U) soundcore(U) snd_page_alloc(U) iTCO_wdt(U) iTCO_vendor_support(U) shpchp(U) i2c_i801(U) i2c_core(U) ioatdma(U) dca(U) pcspkr(U) dm_multipath(U) usb_storage(U) mpt2sas(U) scsi_transport_sas(U) raid_class(U)
 ahci(U) raid1(U) [last unloaded: microcode]
Pid: 29478, comm: top Tainted: P        W  2.6.32-300.11.1.el5uek #1 SUN FIRE X4370 M2 SERVER
RIP: 0010:[<ffffffff8104b3e8>]  [<ffffffff8104b3e8>] thread_group_times+0x5b/0xab
...
Kernel panic - not syncing: Fatal exception
Pid: 29478, comm: top Tainted: P      D W  2.6.32-300.11.1.el5uek #1
Call Trace:
 [<ffffffff8105797e>] panic+0xa5/0x162
 [<ffffffff8107ae09>] ? up+0x39/0x3e
 [<ffffffff810580d1>] ? release_console_sem+0x194/0x19d
 [<ffffffff8105839a>] ? console_unblank+0x6a/0x6f
 [<ffffffff8105764b>] ? print_oops_end_marker+0x23/0x25
 [<ffffffff81456ea6>] oops_end+0xb7/0xc7
 [<ffffffff8101565d>] die+0x5a/0x63
 [<ffffffff8145677c>] do_trap+0x115/0x124
 [<ffffffff81013674>] do_divide_error+0x96/0x9f
 [<ffffffff8104b3e8>] ? thread_group_times+0x5b/0xab
 [<ffffffff810dd2f8>] ? get_page_from_freelist+0x4be/0x65e
 [<ffffffff81012b1b>] divide_error+0x1b/0x20
 [<ffffffff8104b3e8>] ? thread_group_times+0x5b/0xab
 [<ffffffff8104b3d4>] ? thread_group_times+0x47/0xab
 [<ffffffff8116ee13>] ? collect_sigign_sigcatch+0x46/0x5e
 [<ffffffff8116f366>] do_task_stat+0x354/0x8c3
 [<ffffffff81238267>] ? put_dec+0xcf/0xd2
 [<ffffffff81238396>] ? number+0x12c/0x244
 [<ffffffff8107419b>] ? get_pid_task+0xe/0x19
 [<ffffffff811eac34>] ? security_task_to_inode+0x16/0x18
 [<ffffffff8116a77b>] ? task_lock+0x15/0x17
 [<ffffffff8116add1>] ? task_dumpable+0x29/0x3c
 [<ffffffff8116c1c6>] ? pid_revalidate+0x80/0x99
 [<ffffffff81135992>] ? seq_open+0x25/0xba
 [<ffffffff81135a08>] ? seq_open+0x9b/0xba
 [<ffffffff8116d147>] ? proc_single_show+0x0/0x7a
 [<ffffffff81135b2e>] ? single_open+0x8f/0xb8
 [<ffffffff8116aa0e>] ? proc_single_open+0x23/0x3b
 [<ffffffff81127cc1>] ? do_filp_open+0x4f8/0x92d
 [<ffffffff8116f8e9>] proc_tgid_stat+0x14/0x16
 [<ffffffff8116d1a6>] proc_single_show+0x5f/0x7a
 [<ffffffff81135e73>] seq_read+0x193/0x350
 [<ffffffff811ea88c>] ? security_file_permission+0x16/0x18
 [<ffffffff8111a797>] vfs_read+0xad/0x107
 [<ffffffff8111b24b>] sys_read+0x4c/0x70
 [<ffffffff81011db2>] system_call_fastpath+0x16/0x1b
Rebooting in 60 seconds..???

A quick search on My Oracle Support found a match: Kernel Panic at “thread_group_times+0x5b/0xab” (Doc ID 1620097.1). The call stack and the messages are a 100% match, and the root cause is a kernel bug that’s fixed in more recent versions.
I’m not sure how I would have gotten to the root cause if this system were not an ODA and the server had just bounced without logging the kernel panic anywhere. ODA’s ILOM definitely made the troubleshooting effort less painful, and probably saved us from a couple more incidents caused by this bug in the future, as we were able to troubleshoot it quickly and will be able to implement the fix sooner.
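
If you want to keep a copy of the console history for later analysis, many ILOM firmware versions also accept a single command on the ssh command line. This is only a sketch using the host name from this post – verify the exact syntax against your ILOM version before relying on it:

# capture the full console history to a local file for offline analysis
ssh pythian@oda01a-mgmt 'show /HOST/console/history' > oda01a_console_history.txt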

Categories: DBA Blogs

Opening Up the LMS Walled Garden

Michael Feldstein - Wed, 2014-09-17 08:45

In yesterday’s post I described where I (and many others) see the LMS market heading in terms of interoperability.

At the same time, the LMS does a very poor job at providing a lot of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match functionality of open internet tools especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Coexistence and interoperability, however, should not imply merely having links from the LMS to external tools as is too often the case.

The Walled Garden

The LMS (which George Station rightly points out was really called the Course Management System in the early years) started out as a walled garden with basic functionality of syllabus sharing, announcements, gradebook, email, and a few other tools.

[Image: walledgarden]

Over time, as Jared Stein points out in his blog post:

Flash forward to 2005(ish), when “Web 2.0” was on many educators’ minds as a new wave of services that made it easier for anyone to express themselves to anyone who was interested in participating. New web services and social media made the legacy LMS look like what it was: A slow-moving cruise ship that locked passengers in their cabins. It didn’t care about user experience. It didn’t care about integrating with social media. It didn’t care about encouraging novel practices or experimentation. But those were really just symptoms; the sickness was that the LMS vendors didn’t care about what was happening in our culture and in our communities as connectivity and multimedia exploded through the open web.

The LMS vendors did not just ignore these new services, however; they tried to eat their cake and have it too, by creating poor imitations of the external tools and stuffing them inside the LMS.

[Image: walledgarden2]

As Web 2.0 tools proliferated, this approach of maintaining the walled garden was one of the primary causes of feature bloat and poorly-designed learning tools within the LMS.

[Image: walledgarden3]

False Binary – A Choice

This situation – a walled garden LMS with feature bloat and inelegant tools even as external tools multiply – represents the bad side of the ed tech market as it has existed. Despite the weakness of this design approach, the vendors themselves were not the only ones at fault. As Mike Caulfield points out in his description of the “elegant and extensible” Prometheus:

A number of years later I asked a person I knew who worked at Prometheus why Prometheus failed. Did Blackboard crush them?

His answer was interesting. No, it wasn’t Blackboard at all. It was the educational institutions. With the slow, resource-intensive and state-mandated RFP processes, the interminable faculty commitees, and the way that even after the deal was signed the institution would delay payment and implementation as long as possible (or suddenly throw it into an unanticipated ‘final review’) it was just not possible to grow a stable business. The process institutions followed was supposed to ensure equitable access to contracts, but what it did was made it impossible for any company not sitting on a pile of cash to stay in business. (I’m extrapolating a bit here, but not much).

I would add that the RFP process also encourages a feature checklist mentality, elevating the importance of being able to say “we have that feature” and minimizing the ability to say “this design doesn’t suck”.

Many institutions have reacted slowly to the proliferation of tools and officially support only the enterprise LMS – often due to FERPA / student privacy concerns but also due to perceived inability of central units to provide support to faculty and students on multiple tools.

But this is a choice, even in the current market with limited interoperability. There are other institutions that support not only the official enterprise LMS but also multiple learning tools. While institutions have a responsibility to provide baseline LMS services for faculty, there is a strong argument that they also have a responsibility to support the innovators and early adopters that want to explore with different learning tools, whether or not they integrate with the LMS within a course.

Moving Beyond the Wall

But can the market progress such that the enterprise LMS can coexist with open tools even at the course level? The answer in my mind is yes, and the work to move in this direction has been in progress for years. Thanks to the LTI specification, and in the future the Caliper interoperability framework, the vision that George Kroner describes is getting closer and closer.

But the LMSs today won’t be the LMSs of tomorrow. Rather than being a “dumping ground” for content, maybe one possible future for LMSs is as Learning Management Scaffolding – metaphorically supporting learning no matter its shape or form – with content being viewed and activities taking place inside and outside of the LMS. Maybe content will be seamlessly navigable around the LMS and the web – and perhaps in other types of systems like LCMSs – Learning Content Management Systems. Maybe learning tools of all types and sizes – but external to the LMS – will support every long-tail instructional desire imaginable while assessment results feed back into the LMS gradebook. Maybe the LMS will be the storage mechanism for learning analytics as well, but it is more likely that it will become only one source of data feeding into another system better-suited for the task. But try as I might, I fail to imagine a future without some centrally-managed, instructor-accessible system that stores rosters and grades, enforces privacy and security policies, and provides some form of starting-off point for students.

In this developing future market, coexistence of LMS and Open will include not just links or grudging institutional support, but also the sharing of rosters, data, and context: Open tools that start with the class roster in place, data on user activity shared between apps, and the ability for external apps to run in the context of the course design and recent class activities.

[Image: walledgarden5]

There will be painful implementations – caused both by LMS vendors and by institutions – that will prevent a smooth transition to this breakdown of the walled garden, but it will become increasingly difficult for LMS solutions to survive over time if they don’t adapt. There will also be market niches (e.g. specific online programs) that will retain the walled garden LMS approach, but in general the markets should change.

I personally see the realistic future as having more of a choice of tools rather than a minimal LMS. LMS vendors will continue to have reasons to develop (or acquire) their own internal tools, and there will even be cases where the tight integration and focused development will lead to better tools in the LMS than outside. The key change will be the ability for integration decisions – which tools to use in specific classes or in specific institutions – to be made closer to the faculty and student end users. From LMS vendor to central IT to academic program to even individual faculty – moving closer to those who know the specific needs of the class. Central IT and the institution will remain important in setting policies and permissions to protect student privacy and provide guidance to faculty and course designers who are more conservative in their ed tech usage. But either way (minimal LMS or swappable tool LMS), I think the long-term trend is moving in this direction of LMS and Open tool coexistence.

Update 9/19: Updated graphics to add LMS label, CC license and logo to facilitate sharing outside of blog.

The post Opening Up the LMS Walled Garden appeared first on e-Literate.

Ignoring outliers in aggregate function

Tony Andrews - Wed, 2014-09-17 08:22
This is another aide-memoire for myself really. I want to calculate the average load times per page for an application from timings stored in the database, and see which pages need attention. However, the stats can be skewed by the odd exceptional load that takes much longer than a typical load, for reasons that are probably irrelevant to me. The post builds a fictitious example on a table called timings; the full worked example is at http://tonyandrews.blogspot.com/2014/09/ignoring-outliers-in-aggregate-function.html
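
The feed truncates the example, so here is an illustrative sketch only – not necessarily the approach taken in the full post. One common way to ignore outliers is to discard, per page, any timing above a chosen percentile before averaging; the column names (page_name, load_secs), the 90th-percentile cut-off and the connection details are assumptions made for this sketch:

sqlplus -s /nolog <<'EOF'
connect app_user/app_password
-- average load time per page, ignoring the slowest 10% of loads for each page
SELECT page_name,
       AVG(load_secs) AS avg_load_secs
FROM  (SELECT page_name,
              load_secs,
              PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY load_secs)
                OVER (PARTITION BY page_name) AS p90
       FROM   timings)
WHERE  load_secs <= p90
GROUP  BY page_name
ORDER  BY avg_load_secs DESC;
EOF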

OOW - Focus On Support and Services for PeopleSoft

Chris Warticki - Wed, 2014-09-17 08:00
Focus On Support and Services for PeopleSoft

Conference Sessions

Monday, Sep 29, 2014

Integrating PeopleSoft for Seamless IT Service Delivery: Tips from UCF
Robert Yanckello, CTO, UCF
Sastry Vempati, Director ACS Customer Service Management, Oracle
2:45 PM - 3:30 PM, Moscone West - 2024, CON2541

Wednesday, Oct 01, 2014

Leading Practices in a PeopleSoft 9.2 Upgrade
Bryan Moore, OMES
Michael Widell, Deputy Business Segment Director, OMES
Rick Humphress, Project Manager, Oracle
11:30 AM - 12:15 PM, Intercontinental - Intercontinental B, CON2871

Thursday, Oct 02, 2014

PeopleSoft: Support in the Age of Update Manager
Ganesan Sankaran, Principal Support Engineer, Oracle
12:00 PM - 12:45 PM, Moscone West - 2020, CON8325

Is Your Organization Trying to Focus on an HCM Cloud Strategy?
Rich Isola, Sr. Practice Director, Oracle
Heather Mcaninch, Senior Client Executive HCM, Oracle
Ken Thompson, Sr. Practice Director, Oracle
1:15 PM - 2:00 PM, Palace - Grand Ballroom, CON7574

Parallel Upgrade of PeopleSoft Applications and Oracle Database: Tips from MetLife
Gopi Kotha, Software Systems Specialist, MetLife
Asha Santosh, Lead PeopleSoft DBA, Metropolitan Life Insurance Company (inc)
Navin Lobo, Principal Advanced Support Engineer, Oracle
1:15 PM - 2:00 PM, Moscone West - 2020, CON6106

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.

  • Monday, Sep 29: 9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908
  • Tuesday, Sep 30: 9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908
  • Wednesday, Oct 01: 9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

Oracle Java Compute Cloud Service Now Available!

Today Oracle added exciting new services to our existing Public Cloud offerings. First things first: it all begins with the Oracle Compute Cloud service. It offers Elastic Compute Capacity, where...

Categories: DBA Blogs

Getting The Users’ Trust – Part 1

Rittman Mead Consulting - Wed, 2014-09-17 03:02

Looking back over some of my truly ancient Rittman Mead blogs (so old in fact that they came with me when I joined the company soon after Rittman Mead was launched), I see recurrent themes on why people “do” BI and what makes for successful implementations. After all, why would an organisation wish to invest serious money in a project if it does not give value, either in terms of cost reduction or of increasing profitability through smart decisions? This requires technology to provide answers and a workforce that is both able to use this technology and has faith that the answers returned allow them to do their jobs better. Giving users this trust in the BI platform generally boils down to resolving these three issues: ease of use of the reporting tool, quickness of data return, and “accuracy” or validity of the response. These last two issues are a fundamental part of my work here at Rittman Mead and underpin all that I do in terms of BI architecture, performance, and data quality. Even today, as we adapt our BI systems to include Big Data and Advanced Analytics, I follow the same sound approaches to ensure usable, reliable data and the ability to analyse it in a reasonable time.

Storage is cheap so don’t aggregate away your knowledge. If my raw data feed is sales by item by store by customer by day and I only store it in my data warehouse as sales by month by state I can’t go back to do any analysis on my customers, my stores, my products. Remember that the UNGROUP BY only existed in my April Fools’ post. Where you choose to store your ‘unaggregated’ data may well be different these days; Hadoop and schema on read paradigms often being a sensible approach. Mark Rittman has been looking at architectures where both the traditional DWH and Big Data happily co-exist.

When improving performance I tend to avoid tuning specific queries; instead I aim to make frequent access patterns work well. Tuning individual queries is almost always not a sustainable approach in BI; this week’s hot, ‘we need the answer immediately’ query may have no business focus next week. Indexes that we create to make a specific query fly may have no positive effect on other queries; indeed, indexes may degrade other aspects of BI performance, such as increasing data load times, and have subtle effects such as changing a query plan cost so that groups of materialized views are no longer candidates for query re-write (this is especially true when you use nested views and the base view is no longer accessed).

My favoured performance improvement techniques are: correctly placing the data, be it clustering, partitioning, compressing, table pinning, in-memory or whatever, and making sure that the query optimiser knows all about the nature of the data; again and again, “right” optimiser information is key to good performance. Right is not just about running DBMS_STATS.gather_XXX over tables or schemas every now and then; it is also about telling the optimiser about relationships between data items. Constraints describe the data, for example which columns allow NULL values and which columns are part of parent-child relationships (foreign keys). Extended table statistics can help describe relationships between columns in a single table; for example, in a product dimension table the product sub-category and product category columns will have an interdependence. Without that knowledge, cardinality estimates can be very wrong and favour nested loop style plans that could perform very poorly on large data sets.
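
As a minimal sketch of what “telling the optimiser about column relationships” can look like – the schema, table and column names below are illustrative rather than taken from a real system – a column group extension can be created and then populated by a normal statistics gather:

sqlplus -s / as sysdba <<'EOF'
-- create a column group so the optimiser knows sub-category and category are correlated
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'SALES_DW',
         tabname   => 'PRODUCT_DIM',
         extension => '(PROD_SUBCATEGORY, PROD_CATEGORY)')
FROM   dual;

-- re-gather statistics so the new extension gets its own NDV (and possibly a histogram)
EXEC DBMS_STATS.GATHER_TABLE_STATS('SALES_DW', 'PRODUCT_DIM', method_opt => 'FOR ALL COLUMNS SIZE AUTO')
EOF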

Sometimes we will need to create aggregates to answer queries quickly; I tend to build ‘generic’ aggregates, those that can be used by many queries. Often I find that although data is loaded frequently, even near-real-time, many business users wish to look at larger time windows such as week, month, or quarter; in practice I see little need for day-level aggregates over the whole data warehouse timespan, however there will always be specific cases that might require day-level summaries. If I build summary tables or use materialized views I aim to make tables that are at least 80% smaller than the base table and to avoid aggregates that partially roll up many dimensional hierarchies; customer category by product category by store region by month would probably not be the ideal aggregate for most real-user queries. That said, Oracle does allow us to use fancy grouping semantics in the building of aggregates (grouping sets, GROUP BY ROLLUP and GROUP BY CUBE), as sketched below. The in-database Oracle OLAP cube functionality is still alive and well (and was given a performance boost in Oracle 12c); it may be more appropriate to aggregate in a cube (or its relational look-alike) rather than in individual summaries.
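
A sketch of one such ‘generic’ aggregate using ROLLUP – the fact table, column names and credentials are invented for illustration, so a real materialized view would be built against your own star schema:

sqlplus -s dw_owner/dw_password <<'EOF'
-- one summary that can satisfy month-level, region-level and grand-total queries via query rewrite
CREATE MATERIALIZED VIEW sales_rollup_mv
BUILD IMMEDIATE
ENABLE QUERY REWRITE
AS
SELECT calendar_month,
       store_region,
       GROUPING_ID(calendar_month, store_region) AS gid,
       SUM(sales_amount)                         AS sales_amount,
       COUNT(*)                                  AS row_cnt
FROM   sales_fact
GROUP  BY ROLLUP (calendar_month, store_region);
EOF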

Getting the wrong results quickly is no good; we must be sure that the results we display are correct. As professional developers we test to prove that we are not losing or gaining data through incorrect joins and filters, but ETL coding is often the smallest factor in “incorrect results”, and this brings me to part 2, Data Quality.

Categories: BI & Warehousing

About index range scans, disk re-reads and how your new car can go 600 miles per hour!

Tanel Poder - Wed, 2014-09-17 02:56

Despite the title, this is actually a technical post about Oracle, disk I/O and Exadata & Oracle In-Memory Database Option performance. Read on :)

If a car dealer tells you that this fancy new car on display goes 10 times (or 100 or 1000) faster than any of your previous ones, then either the salesman is lying or this new car is doing something radically different from all the old ones. You don’t just get orders of magnitude performance improvements by making small changes.

Perhaps the car bends space around it instead of moving – or perhaps it has a jet engine built on it (like the one below :-) :

Anyway, this blog entry is a prelude to my upcoming Oracle In-Memory Database Option series and here I’ll explain one of the radical differences between the old way of thinking and modern (In-Memory / Smart Scan) thinking that allow such performance improvements.

To set the scope and clarify what I mean by the “old way of thinking”: I am talking about reporting, analytics and batch workloads here – and the decades-old mantra “if you want more speed, use more indexes”.

I’m actually not going to talk about the In-Memory DB option here – but I am going to walk you through the performance numbers of one index range scan. It’s a deliberately simple and synthetic example executed on my laptop, but it should be enough to demonstrate one important point.

Let’s say we have a report that requires me to visit 20% of rows in an orders table and I’m using an index range scan to retrieve these rows (let’s not discuss whether that’s wise or not just yet). First, I’ll give you some background information about the table and index involved.

My test server’s buffer cache is currently about 650 MB:

SQL> show sga

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size             369100920 bytes
Database Buffers          687865856 bytes
Redo Buffers               13848576 bytes
In-Memory Area           1073741824 bytes

The table I am accessing is a bit less than 800 MB in size, about 100k blocks:

SQL> @seg soe.orders

    SEG_MB OWNER  SEGMENT_NAME   SEGMENT_TYPE    BLOCKS 
---------- ------ -------------  ------------- -------- 
       793 SOE    ORDERS         TABLE           101504 

I have removed some irrelevant output from the output below, I will be using the ORD_WAREHOUSE_IX index for my demo:

SQL> @ind soe.orders
Display indexes where table or index name matches %soe.orders%...

TABLE_OWNER  TABLE_NAME  INDEX_NAME         POS# COLUMN_NAME     DSC
------------ ----------- ------------------ ---- --------------- ----
SOE          ORDERS      ORDER_PK              1 ORDER_ID
                         ORD_WAREHOUSE_IX      1 WAREHOUSE_ID
                                               2 ORDER_STATUS

INDEX_OWNER  TABLE_NAME  INDEX_NAME        IDXTYPE    UNIQ STATUS   PART TEMP  H  LFBLKS       NDK   NUM_ROWS      CLUF LAST_ANALYZED     DEGREE VISIBILIT
------------ ----------- ----------------- ---------- ---- -------- ---- ---- -- ------- --------- ---------- --------- ----------------- ------ ---------
SOE          ORDERS      ORDER_PK          NORMAL/REV YES  VALID    NO   N     3   15801   7148950    7148950   7148948 20140913 16:17:29 16     VISIBLE
             ORDERS      ORD_WAREHOUSE_IX  NORMAL     NO   VALID    NO   N     3   17860      8685    7148950   7082149 20140913 16:18:03 16     VISIBLE

I am going to do an index range scan on the WAREHOUSE_ID column:

SQL> @descxx soe.orders

Col# Column Name                    Null?      Type                      NUM_DISTINCT        Density  NUM_NULLS HISTOGRAM       NUM_BUCKETS Low Value                        High Value
---- ------------------------------ ---------- ------------------------- ------------ -------------- ---------- --------------- ----------- -------------------------------- --------------------------------
   1 ORDER_ID                       NOT NULL   NUMBER(12,0)                   7148950   .00000013988          0                           1 1                                7148950
...
   9 WAREHOUSE_ID                              NUMBER(6,0)                        999   .00100100100          0                           1 1                                999
...

Also, I enabled SQL trace and event 10298 – “ORA-10298: ksfd i/o tracing”, more about that later:

SQL> ALTER SESSION SET EVENTS '10298 trace name context forever, level 1';

Session altered.

SQL> EXEC SYS.DBMS_MONITOR.SESSION_TRACE_ENABLE(waits=>TRUE);

PL/SQL procedure successfully completed.

SQL> SET AUTOTRACE ON STAT

Ok, now we are ready to run the query! (It’s slightly formatted):

SQL> SELECT /*+ MONITOR INDEX(o, o(warehouse_id)) */ 
         SUM(order_total) 
     FROM 
         soe.orders o 
     WHERE 
         warehouse_id BETWEEN 400 AND 599;

Let’s check the basic autotrace figures:

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
    1423335  consistent gets
     351950  physical reads
          0  redo size
        347  bytes sent via SQL*Net to client
        357  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

What?! We have done 351950 physical reads?! This is 351950 blocks read via physical read operations. This is about 2.7 GB worth of IOs done just for this query! Our entire table size was under 800MB and the index size under 150MB. Shouldn’t indexes allow us to visit less blocks than the table size?!

Let’s dig deeper – by breaking down this IO number by execution plan line (using a SQL Monitoring report in this case):

Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|      48 |      25 |       23 |     1 |     1M | 352K |   3GB |
================================================================

SQL Plan Monitoring Details (Plan Hash Value=16715356)
=============================================================================================================================================
| Id |               Operation                |       Name       | Execs |   Rows   | Read | Read  | Activity |       Activity Detail       |
|    |                                        |                  |       | (Actual) | Reqs | Bytes |   (%)    |         (# samples)         |
=============================================================================================================================================
|  0 | SELECT STATEMENT                       |                  |     1 |        1 |      |       |          |                             |
|  1 |   SORT AGGREGATE                       |                  |     1 |        1 |      |       |          |                             |
|  2 |    TABLE ACCESS BY INDEX ROWID BATCHED | ORDERS           |     1 |       1M | 348K |   3GB |    96.30 | Cpu (1)                     |
|    |                                        |                  |       |          |      |       |          | db file parallel read (25)  |
|  3 |     INDEX RANGE SCAN                   | ORD_WAREHOUSE_IX |     1 |       1M | 3600 |  28MB |     3.70 | db file sequential read (1) |
=============================================================================================================================================

So, most of these IOs come from accessing the table (after fetching relevant ROWIDs from the index). 96% of response time of this query was also spent in that table access line. We have done about ~348 000 IO requests for fetching blocks from this table. This is over 3x more blocks than the entire table size! So we must be re-reading some blocks from disk again and again for some reason.

Let’s confirm if we are having re-reads. This is why I enabled the SQL trace and event 10298. I can just post-process the tracefile and see if IO operations with the same file# and block# combination do show up.

However, using just SQL trace isn’t enough, because multiblock read wait events don’t show all blocks read (you’d have to infer this from the starting block# and count#) and the “db file parallel read” event doesn’t show any block#/file# info at all in SQL Trace (as this “vector read” wait event encompasses multiple different block reads under a single wait event).

The classic single block read has the file#/block# info:

WAIT #139789045903344: nam='db file sequential read' ela= 448 file#=2 block#=1182073 blocks=1 obj#=93732 tim=156953721029

The parallel read wait events don’t have individual file#/block# info (just total number of files/blocks involved):

WAIT #139789045903344: nam='db file parallel read' ela= 7558 files=1 blocks=127 requests=127 obj#=93696 tim=156953729450

Anyway, because we had plenty of db file parallel read waits that don’t show all the detail in SQL Trace, I also enabled the event 10298 that gives us following details below (only tiny excerpt below):

...
ksfd_osdrqfil:fob=0xce726160 bufp=0xbd2be000 blkno=1119019 nbyt=8192 flags=0x4
ksfdbio:rq=0x7f232c4edb00 fob=0xce726160 aiopend=126
ksfd_osdrqfil:fob=0xce726160 bufp=0x9e61a000 blkno=1120039 nbyt=8192 flags=0x4
ksfdbio:rq=0x7f232c4edd80 fob=0xce726160 aiopend=127
ksfdwtio:count=127 aioflags=0x500 timeout=2147483647 posted=(nil)
...
ksfdchkio:ksfdrq=0x7f232c4edb00 completed=1
ksfdchkio:ksfdrq=0x7f232c4edd80 completed=0
WAIT #139789045903344: nam='db file parallel read' ela= 6872 files=1 blocks=127 requests=127 obj#=93696 tim=156953739197

So, on Oracle 12.1.0.2 on Linux x86_64 with xfs filesystem with async IO enabled and filesystemio_options = SETALL we get the “ksfd_osdrqfil” trace entries to show us the block# Oracle read from a datafile. It doesn’t show the file# itself, but it shows the accessed file state object address (FOB) in SGA and as it was always the same in the tracefile, I know duplicate block numbers listed in trace would be for the same datafile (and not for a block with the same block# in some other datafile). And the tablespace I used for my test had a single datafile anyway.

Anyway, I wrote a simple script to summarize whether there were any disk re-reads in this tracefile (of a select statement):

$ grep ^ksfd_osdrqfil LIN121_ora_11406.trc | awk '{ print $3 }' | sort | uniq -c | sort -nr | head -20
     10 blkno=348827
     10 blkno=317708
      9 blkno=90493
      9 blkno=90476
      9 blkno=85171
      9 blkno=82023
      9 blkno=81014
      9 blkno=80954
      9 blkno=74703
      9 blkno=65222
      9 blkno=63899
      9 blkno=62977
      9 blkno=62488
      9 blkno=59663
      9 blkno=557215
      9 blkno=556581
      9 blkno=555412
      9 blkno=555357
      9 blkno=554070
      9 blkno=551593
...

Indeed! The “worst” blocks have been read in 10 times – all that for a single query execution.

I only showed the top 20 blocks here, but even when I used “head -10000” and “head -50000” above, I still saw blocks that had been read in to buffer cache 8 and 4 times respectively.

Looking into earlier autotrace metrics, my simple index range scan query did read in over 3x more blocks than the total table and index size combined (~350k blocks read while the table had only 100k blocks)! Some blocks have gotten kicked out from buffer cache and have been re-read back into cache later, multiple times.

Hmm, let’s think further: We are accessing only about 20% of a 800 MB table + 150 MB index, so the “working set” of datablocks used by my query should be well less than my 650 MB buffer cache, right? And as I am the only user in this database, everything should nicely fit and stay in buffer cache, right?

Actually, both of the arguments above are flawed:

  1. Accessing 20% of rows in a table doesn’t automatically mean that we need to visit only 20% of that table’s blocks! Maybe all of the table’s blocks contain a few of the rows this index range scan needs? So we might need to visit all of that table’s blocks (or most of them) and extract only a few matching rows from each block. Nevertheless, the “working set” of required blocks for this query would include almost all of the table blocks, not only 20%. We must read all of them in at some point in the range scan. So, the matching rows in table blocks are not tightly packed and physically in correspondence with the index range scan’s table access driving order, but are potentially “randomly” scattered all over the table. This means that an index range scan may come back and access some data block again and again to get yet another row from it when the ROWID entries in index leaf blocks point there. This is what I call buffer re-visits. (Now scroll back up and see what that index’s clustering factor is :-)

  2. So what, all the buffer re-visits should be really fast as the previously read block is going to be in buffer cache, right? Well, not really. Especially when the working set of blocks read is bigger than the buffer cache. But even if it is smaller, the Oracle buffer cache isn’t managed using basic LRU replacement logic (since 8.1.6). New blocks that get read in to buffer cache will be put into the middle of the “LRU” list and they work their way up to the “hot” end only if they are touched enough times before someone manages to flush them out. So even if you are a single user of the buffer cache, there’s a chance that some just recently read blocks get aged out from buffer cache – by the same query still running – before they get hot enough. And this means that your next buffer re-visit may turn into a disk block re-read, as we saw in the tracefiles. If you combine this with the reality of production systems, where there are a thousand more users trying to do what you’re doing at the same time, it becomes clear that you’ll be able to use only a small portion of the total buffer cache for your needs. This is why people sometimes configure KEEP pools – not that the KEEP pool is somehow able to keep more blocks in memory for longer per GB of RAM, but simply for segregating the less important troublemakers from the more important… troublemakers :)

 

So what’s my point here – in the context of this blog post’s title?

Let’s start with Exadata – over the last years it has given many customers order(s) of magnitude better analytics, reporting and batch performance compared to their old systems, if done right of course. In other words, instead of indexing even more and performing wide index range scans with millions of random block reads and re-reads, they ditched many indexes and started doing full table scans. Full table scans do not have the “scaling problems” of a wide index range scan (or a “wide” nested loop join driving access to another table). In addition you get all the cool stuff that goes really well with full scans – multiblock reads, deep prefetching, partition-wise hash joins, partition pruning and of course all the throughput and Smart Scan magic on Exadata.

An untuned complex SQL statement on a complex schema with lots of non-ideal indexes may end up causing a lot of “waste IO” (I don’t have a better term) and, similarly, wasted CPU usage too. And often it’s not simple to actually fix the query – it may end up needing a significant schema adjustment/redesign that would also require changing the application code in many different places (ain’t gonna happen). By defaulting reporting to full table scans, you can actually eliminate a lot of such waste, assuming that you have a high-throughput – and ideally smart – IO subsystem. (Yes, there are always exceptions and special cases.)

We had a customer who had a reporting job that ran almost 2000x faster after moving to Exadata (from 17 hours to 30 seconds or something like that). Their first reaction was: “It didn’t run!” Indeed it did run and it ran correctly. Such radical improvement came from the fact that the new system – compared to the old system – was doing multiple things radically better. It wasn’t just an incremental tweak of adding a hint or a yet another index without daring to do more significant changes.

In this post I demoed just one of the problems that’s plaguing many of the old-school Oracle DW and reporting systems. While favoring full table scanning had always been counterintuitive for most Oracle shops out there, it was Exadata’s hardware, software and also the geek-excitement surrounding it that allowed customers to take the leap and switch from the old mindset to the new. I expect the same from the Oracle In-Memory Database Option. More about this in a following post.

 


Reading: Oracle Magazine September / October 2014

Jean-Philippe Pinte - Wed, 2014-09-17 00:06
The September / October 2014 issue of Oracle Magazine is available.

Heart Walk at Oracle HQ

David Haimes - Tue, 2014-09-16 23:19

Tomorrow Oracle is hosting a Bay Area Heart Walk at its HQ campus, and it should be a huge event, not only raising valuable funds but also raising awareness of the importance of a heart-healthy lifestyle. Since the beginning of this year I have been doing regular walking meetings and have enjoyed the benefits to both my health and my productivity. I also found many other people already doing walking meetings and others who have been inspired to start them. I’ve really enjoyed the overwhelmingly positive responses and encouragement.

Highlights of the event include the Semi Official Oracle house band, Scope Creep; check out this YouTube clip and you may recognize some of the faces, including my boss on drums, who I can confirm does not appreciate my joke – what do you call somebody with no talent who hangs with musicians? – answers in the comment section, please.

I will be walking with ‘Team Erb’ led by the energetic and enthusiastic (except when it comes to twitter) Janine Erb, who is one of the singers in the YouTube clip above.  We have close to 100 people in the team and I’m looking forward to walking with and talking to people I have known and worked with for a long time and also making some new friends.  If you are there tomorrow, come and say hi.  I’ll take some pictures and probably tweet during the event.

Finally, if you want to donate you can follow this link, but probably more importantly, spend some time educating yourself; the American Heart Association is one place to start.


Categories: APPS Blogs

Behind the Screen with Oracle Support

Joshua Solomin - Tue, 2014-09-16 22:43


Get beyond the support interface screen to meet the experts from Oracle Support at the Oracle Support Stars Bar. Have a tough question about supporting or upgrading your Oracle products? Looking for best practices for problem prevention, rapid resolution, and product upgrades? Stop by the Stars Bar and speak directly with an expert who can help.

While you’re there, check out one of our 10-minute briefing sessions on the hottest support topics. Here are just a few of the high-impact briefings you can see at this year’s Stars Bar:

  • Proactive Support Best Practices
  • Oracle Platinum Services
  • My Oracle Support Tips & Tricks
  • And many more!

The Support Stars Bar is open Monday, Tuesday and Wednesday in the Moscone West Exhibition Hall (Booths 3461 and 3908). More details here.

Visit the Services and Support Oracle OpenWorld Website to discover how you can take advantage of all Oracle OpenWorld has to offer. See you there!


ps and top differences with HugePages

Bobby Durrett's DBA Blog - Tue, 2014-09-16 18:09

The Unix utilities ps and top report memory differently with HugePages than without.

Without HugePages ps -eF seems to include the SGA memory under the SZ column:

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
... 
oracle    1822     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d000_orcl
oracle    1824     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d001_orcl
oracle    1826     1  0 846155 16236  0 07:19 ?        00:00:00 ora_d002_orcl
oracle    1828     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d003_orcl
oracle    1830     1  0 846155 16224  0 07:19 ?        00:00:00 ora_d004_orcl
oracle    1832     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d005_orcl
oracle    1834     1  0 846155 16236  0 07:19 ?        00:00:00 ora_d006_orcl
oracle    1836     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d007_orcl
oracle    1838     1  0 846155 16224  0 07:19 ?        00:00:00 ora_d008_orcl
oracle    1840     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d009_orcl
oracle    1842     1  0 846155 16240  0 07:19 ?        00:00:00 ora_d010_orcl
oracle    1844     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d011_orcl
...

Here SZ = 846155 kilobytes = 826 megabytes.  If you add up all the SZ values it comes to 81 gigabytes, which won’t fit in my 4 gig memory and 4 gig swap.  It seems to include the amount of the SGA actually used, not the full 3 gigabyte max sga size; otherwise the total would have been hundreds of gigabytes.
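
A quick way to reproduce that total is to sum the SZ column yourself – a rough sketch; note that man ps documents SZ in pages on some platforms, so check the unit on your system before converting the sum:

# add up the SZ column (5th field of ps -eF) for all processes owned by oracle
ps -eF | awk '$1 == "oracle" { sum += $5 } END { print sum, "total SZ units for oracle processes" }'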

Doing the same exercise with 3 gigabytes of huge pages ps looks like this:

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
...
oracle    1809     1  0 59211 15552   0 07:52 ?        00:00:00 ora_d000_orcl
oracle    1811     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d001_orcl
oracle    1813     1  0 59211 15548   0 07:52 ?        00:00:00 ora_d002_orcl
oracle    1815     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d003_orcl
oracle    1817     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d004_orcl
oracle    1819     1  0 59211 15548   0 07:52 ?        00:00:00 ora_d005_orcl
oracle    1821     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d006_orcl
oracle    1823     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d007_orcl
oracle    1825     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d008_orcl
oracle    1827     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d009_orcl
oracle    1829     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d010_orcl
oracle    1831     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d011_orcl
...

SZ = 59211 k = 57 meg.  Total SZ = 5.89 gigabytes.  This is still bigger than total memory, but closer to the 4 gig of memory available.  It’s just a guess, but I’m pretty sure that with HugePages the SZ for each process no longer includes the amount of SGA memory in use, as it did without HugePages.

The other weird thing is how different top looks with HugePages.  Here is top with the database having just come up without HugePages:

top - 07:20:16 up 3 min,  2 users,  load average: 1.06, 0.33, 0.13
Tasks: 187 total,   1 running, 186 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.6%us,  6.3%sy,  0.0%ni, 77.8%id, 14.2%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4050836k total,   984444k used,  3066392k free,    14460k buffers
Swap:  4095996k total,        0k used,  4095996k free,   450128k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 2010 oracle    20   0 3310m  51m  44m D  7.6  1.3   0:00.21 oracle             
 1988 oracle    20   0 3307m  50m  45m D  3.8  1.3   0:00.21 oracle             
 1794 oracle    -2   0 3303m  15m  13m S  1.9  0.4   0:01.07 oracle

Notice that we have about 3 gigabytes free (3066392k) and nothing in swap.

Here is the same system with 3 gig of HugePages:

top - 07:53:21 up 2 min,  2 users,  load average: 0.81, 0.29, 0.11
Tasks: 179 total,   1 running, 178 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.0%us,  8.6%sy,  0.0%ni, 69.2%id, 20.1%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4050836k total,  3860100k used,   190736k free,    14332k buffers
Swap:  4095996k total,        0k used,  4095996k free,   239104k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 1781 oracle    -2   0 3303m  15m  13m S  3.5  0.4   0:01.02 oracle             
    1 root      20   0 19400 1520 1220 S  0.0  0.0   0:01.43 init               
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd

Now only 190736k is free. That makes sense, because the HugePages pool is reserved up front when it is configured and is counted as used memory whether or not the SGA has touched all of it yet. But note that in both cases top lists the oracle processes with 3300 meg of virtual memory, which is consistent with the 3 gig max SGA size.

I’ve still got a lot to learn about HugePages but I thought I would pass along these observations.  This article on Oracle’s support site helped me learn about HugePages:

HugePages on Oracle Linux 64-bit (Doc ID 361468.1)

I ended up sizing the HugePages down to 2 gig on my 4 gig test system and reducing the SGA max size to 2 gig as well. My system was sluggish with so little free memory when I was using a 3 gig SGA and HugePages. It was much snappier with only 2 gig tied up in HugePages and dedicated to the SGA, leaving 2 gig for everything else.
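For reference, /proc/meminfo shows how much of the HugePages pool is allocated and in use, and the pool size itself is set with the vm.nr_hugepages kernel parameter. A minimal sketch, run as root (the 1024-page value is just an example for a 2 gig pool of 2 MB pages; adjust it to your SGA and page size):

# See the HugePages pool size, how much of it is free, and the page size
grep Huge /proc/meminfo

# Example only: reserve a 2 gig pool of 2 MB HugePages (1024 pages)
echo "vm.nr_hugepages=1024" >> /etc/sysctl.conf
sysctl -p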

This was all done with Oracle's 64-bit version of Linux and an 11.2.0.3 database.

– Bobby

Categories: DBA Blogs

ACFS-9459: ASVM/ACFS is not supported on this os version

Sabdar Syed - Tue, 2014-09-16 14:50

After installing the Grid Infrastructure (GI) home for a two-node 11gR2 (11.2.0.4) RAC on Linux 6.5 servers, I tried to create an ACFS file system with the ASMCA tool to host the Oracle (RDBMS) home. But the "Volumes" and "ASM Cluster File Systems" tabs in the ASMCA tool were disabled, so we were unable to create the volume and cluster file system, and got the following error:

"ACFS-9459: ASVM/ACFS is not supported on this os version: '3.8.13-16.2.1.el6uek.x86_64'"

I checked "ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)" under the section "ACFS 11.2.0.4 Supported Platforms" in Oracle Metalink. For our environment, Oracle Linux with the Unbreakable Enterprise Kernel, this is documented as a bug, and the note suggests applying patch 16318126.

After downloading the suggested patch, the following errors were encountered while applying it with OPatch:

"The opatch minimum version  check for patch /16318126/custom failed  for
The opatch minimum version  check for patch /16318126/etc failed  for
The opatch minimum version  check for patch /16318126/files failed  for
Opatch version check failed for oracle home  
Opatch version  check failed
update the opatch version for the failed homes and retry"

Initially I thought the installed OPatch version (11.2.0.3.4) was too old, but it is already higher than the minimum version (11.2.0.3.0) required by the patch readme.txt file.

Even so, I downloaded the latest OPatch ("Patch 6880880: OPatch patch of version 11.2.0.3.6 for Oracle software releases 11.2.0.x") for Linux x86-64. One good thing about this was that I could then generate the OCM response file (ocm.rsp) using the "emocmrsp" utility under $GI_HOME/OPatch/ocm/bin; after installing the GI home for RAC there was no "emocmrsp" file in that location, and the OCM response file is needed to apply the patch in auto mode.
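For anyone following along, the version check and response file generation look roughly like this (a sketch, assuming $GI_HOME points at the 11.2.0.4 Grid Infrastructure home):

# Check the OPatch version installed in the Grid home
$GI_HOME/OPatch/opatch version

# Generate the OCM response file needed by "opatch auto"
# (emocmrsp prompts for an email address, which can be left blank)
cd $GI_HOME/OPatch/ocm/bin
./emocmrsp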

Well, I got the same error again, even with the latest OPatch, while applying patch 16318126:

"Opatch version check failed for oracle home  "

The commands I first used to apply the patch were as follows:

$ cd <PATCH_TOP_DIR>/16318126
i.e. cd /u01/oracle/patch/16318126

Note: This is where patch 16318126, recommended in "ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)", was unzipped.

Log in as root and execute opatch as below:

# opatch auto <PATCH_TOP_DIR>/16318126 -oh <GI_HOME> -ocmrf <GI_HOME>/OPatch/ocm/bin/ocm.rsp

i.e. # opatch auto /u01/oracle/patch/16318126 -oh /u01/oracle/app/11.2.0.4/grid -ocmrf /u01/oracle/app/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp

This is wrong!! And this is why the "Opatch version check failed" error was encountered.

The correct way of applying the patch is as follows:

Log in as root.

Add the OPatch directory to $PATH as follows:

# export PATH=$PATH:$ORACLE_HOME/OPatch

# opatch auto <PATCH_TOP_DIR> -oh <GI_HOME> -ocmrf <GI_HOME>/OPatch/ocm/bin/ocm.rsp

i.e.

# opatch auto /u01/oracle/patch -oh /u01/oracle/app/11.2.0.4/grid -ocmrf /u01/oracle/app/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp

Note:
There is no need to specify the patch number directory, i.e. "/u01/oracle/patch/16318126"; specify only the path up to "/u01/oracle/patch".

Also, make sure there are no other patch directories or files under "/u01/oracle/patch" apart from the patch you need to apply, i.e. "16318126".
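A quick way to double-check that before running opatch auto (illustrative, using the paths from this post):

# The patch top directory should contain only the patch being applied
ls /u01/oracle/patch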

The patch then succeeded as follows:

===========================
# opatch auto /u01/oracle/patch -oh /u01/oracle/app/11.2.0.4/grid -ocmrf /u01/oracle/app/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp

Executing /u01/oracle/app/11.2.0.4/grid/perl/bin/perl /u01/oracle/app/11.2.0.4/grid/OPatch/crs/patch11203.pl -patchdir /u01/oracle -patchn patch -oh /u01/oracle/app/11.2.0.4/grid -ocmrf /u01/oracle/app/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp -paramfile /u01/oracle/app/11.2.0.4/grid/crs/install/crsconfig_params

This is the main log file: /u01/oracle/app/11.2.0.4/grid/cfgtoollogs/opatchauto2014-09-15_17-41-14.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/oracle/app/11.2.0.4/grid/cfgtoollogs/opatchauto2014-09-15_17-41-14.report.log

2014-09-15 17:41:14: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/oracle/app/11.2.0.4/grid/crs/install/crsconfig_params

Stopping CRS...
Stopped CRS successfully

patch /u01/oracle/patch/16318126  apply successful for home  /u01/oracle/app/11.2.0.4/grid

Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.

opatch auto succeeded.

===========================

The same procedure was then repeated to apply the patch on the other node. After this, the "Volumes" and "ASM Cluster File Systems" tabs in the ASMCA tool were enabled, and the original problem of creating the volume and ASM cluster file system was solved.
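As a quick sanity check after patching, the ACFS driver state can be queried from the Grid home; a sketch, again assuming $GI_HOME is the Grid Infrastructure home:

# Confirm that ADVM/ACFS is now supported, installed and loaded on this kernel
$GI_HOME/bin/acfsdriverstate supported
$GI_HOME/bin/acfsdriverstate installed
$GI_HOME/bin/acfsdriverstate loaded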

Note: This blog post is specific to one of our test environments and may not match yours, so please go through the following Oracle Metalink note for your environment.

"ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)"

Regards,
Sabdar Syed.
http://sabdarsyed.blogspot.com/

Handling Format for BigDecimal Numbers in ADF BC

Andrejus Baranovski - Tue, 2014-09-16 13:12
Defining a format for a number attribute in ADF BC may not be as straightforward as it sounds, especially if you are working with large numbers (more than 15 digits). Most likely you will run into precision/scale and rounding issues for BigDecimal and Number type attributes with a format mask applied. Sounds frustrating? Yes, it is. I hope this blog post will help you implement proper number formatting.

First I'm going to demonstrate how it works by default, using a format mask for 18 digits and 2 decimals. The format mask for a number attribute of BigDecimal type fails with an invalid precision/scale error - this is surprising, especially since the attribute is defined with (20, 2) precision and scale:


There is one more issue to handle - automatic rounding for large numbers (more than 15 digits). In this example I'm trying to enter a number with 18 digits:


Suddenly the number is rounded as soon as focus moves out of the field - this is bad, the user can't enter a correct large number:


Let's see how we can fix it. I will explain the solution based on the Salary attribute with (20, 2) precision and scale. By default this attribute is generated with the BigDecimal type; I'm going to keep the same type, there is no need to change it to oracle.jbo.domain.Number:


I have set the format mask at the VO level for the same Salary attribute. There is another issue - the maximum length of the attribute value the user can enter on the UI. The maximum length property reads the precision and calculates the maximum number of characters, but this is wrong when a format mask is applied:

There will be extra comma and dot characters coming from the format, so I would advise setting Display Width equal to the attribute precision plus the maximum number of commas and dots (26 in my case). Later we will need to change the maximumLength property on the UI to use Display Width instead of precision:


Once you set a format mask, a formatter class name is assigned to the attribute. The formatter class name is saved in the resource bundle, separately for each formatted attribute:


You must change the formatter class name to a custom one; don't use the DefaultNumberFormatter class - the out-of-the-box formatter doesn't work with BigDecimals. The sample application comes with a custom number formatter class. The format method is overridden to support BigDecimal; the standard formatter method breaks a BigDecimal number by converting it to Double (this is the reason for the rounding error) - this is fixed in the sample:


The parse method is changed as well (this fixes the precision/scale handling for the BigDecimal). Instead of calling the parse method from super, I'm calling a custom method:

I'm setting a property to enable BigDecimal parsing; this allows the formatter to work correctly with precision/scale:


In addition, precision and scale are checked by a custom method - if there is an invalid number of digits, the user will be informed:


As I mentioned earlier, the maximumLength property on the UI is updated to point to the Display Width set in the VO attribute UI Hints:


We can do a test now - type a long number with 18 digits (remember the (20, 2) precision/scale for the Salary attribute):


The correct format will be applied:

Type the decimal part and now there will be no error about precision/scale - the value is successfully accepted:


You can see from the log how the format is being applied; it works well with the custom formatter class provided in the sample application - ADFFormattingApp.zip:

Mozilla Working to Enhance its Security Process [VIDEO]

Chris Foot - Tue, 2014-09-16 12:44

Transcript

Welcome back to RDX. A proper test environment should be a regular part of your business' Change Management Process. However, if Personally Identifiable Information (PII) is not removed from the test data, sensitive information could be exposed.

According to eWEEK, Mozilla accidentally exposed critical information in two separate incidents. The most recent was first reported August 27 and left 97,000 developers' information exposed for approximately three months. The landfill.bugzilla.org development system exposed information including email addresses and encrypted passwords. The initial disclosure is thought to have occurred during a database migration, when a database dump that included user data was exposed. Users of this system have been advised to change their passwords.

Mozilla is now revising its test plan so that it no longer includes database dumps. An additional step businesses can take to protect their PII is to require two-factor authentication for access.

Thanks for watching! 

The post Mozilla Working to Enhance its Security Process [VIDEO] appeared first on Remote DBA Experts.