
Feed aggregator

January 27th: Acorn Paper Products Sales Cloud Reference Forum

Linda Fishman Hoyle - Tue, 2015-01-06 14:42

Join us for an Oracle Sales Cloud Customer Reference Forum on Tuesday, January 27, 2015, with Acorn Paper Products' Jake Weissberg, Director of IT and David Karr, Chief Operating Officer. In this session, Weissberg and Karr will share why Oracle Sales Cloud was the right choice to optimize sales team productivity and effectiveness, while gaining executive visibility to the pipeline. They also will talk about how they were able to streamline their sales-to-order process with Oracle E-Business Suite.

Founded by Jack Bernstein in 1946, Acorn Paper Products Company started by selling job lot (over-run) boxes with five employees in an 11,000-square-foot warehouse. Today, Acorn, which is the end-user distribution arm of parent holding company Oak Paper Products Company, is a fourth-generation, family-owned business with more than 500,000 square feet of warehouse space, operating four specialty product divisions: creative services, janitorial and sanitary products, wine packaging, and agricultural packaging.

You can register now to attend the live Forum on Tuesday, January 27, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time and learn more from Acorn Paper Products directly.

Count (*)

Jonathan Lewis - Tue, 2015-01-06 12:04

The old chestnut about comparing the speeds of count(*), count(1), count(non_null_column) and count(pk_column) has come up in the OTN database forum (at least) twice in the last couple of months. The standard answer is to point out that they will all execute the same code. The corroborating evidence for that claim is that, for a long time, the 10053 trace files have had a rubric reporting: CNT – count(col) to count(*) transformation, and, for even longer, the error message file (oraus.msg for the English language version) has had an error code 10122 which produced (from at least Oracle 8i, if not 7.3):


SQL> execute dbms_output.put_line(sqlerrm(-10122))
ORA-10122: Disable transformation of count(col) to count(*)

But the latest repetition of the question prompted me to check whether a more recent version of Oracle had an even more compelling demonstration, and it does. I extracted the following lines from a 10053 trace file generated by 11.2.0.4 (and I know 10gR2 is similar) in response to selecting count(*), count(1) and count({non-null column}) respectively:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(1)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(SAMPLE_ID)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

As you can see, Oracle has transformed all three select lists into count(*), hiding the transformation behind the original column alias. As an outsider’s proof of what’s going on, I don’t think you could get a more positive indicator than that.
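If you want to reproduce the test yourself, a minimal sketch follows (the table and column names are illustrative; each statement must be hard-parsed, e.g. be textually new, for the optimizer trace to be written, and the trace file appears under your diagnostic_dest):

alter session set events '10053 trace name context forever, level 1';

select count(*)         from saved_ash;
select count(1)         from saved_ash;
select count(sample_id) from saved_ash;

alter session set events '10053 trace name context off';

Then search the resulting trace file for the "Final query after transformations" lines shown above.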

 


No Discernible Growth in US Higher Ed Online Learning

Michael Feldstein - Tue, 2015-01-06 11:34

By 2015, 25 million post-secondary students in the United States will be taking classes online. And as that happens, the number of students who take classes exclusively on physical campuses will plummet, from 14.4 million in 2010 to just 4.1 million five years later, according to a new forecast released by market research firm Ambient Insight.

- Campus Technology, 2011

On the positive side, Moody’s notes that the U.S. Department of Education projects a 20-percent growth in master’s degrees and a 9-percent growth in associate degrees, opportunities in both online education and new certificate programs, and a rising earnings premium for those with college degrees.

- Chronicle of Higher Ed, 2014

Q.  How likely would it be that this fraction [% students taking online courses] would grow to become a majority of students over the next five years? A [from institutional academic leaders]. Nearly two-thirds responded that this was “Very likely,” with an additional one-quarter calling it “Likely.” [That’s almost 90% combined]

- Grade Change, Babson Survey 2013

More than two-thirds of instructors (68 percent) say their institutions are planning to expand their online offerings, but they are split on whether or not this is a good idea (36 percent positive, 38 percent negative, 26 percent neutral).

- Inside Higher Ed 2014

Still, the [disruptive innovation] theory predicts that, be it steam or online education, existing consumers will ultimately adopt the disruption, and a host of struggling colleges and universities — the bottom 25 percent of every tier, we predict — will disappear or merge in the next 10 to 15 years.

- Clayton Christensen in NY Times 2013

You could be forgiven for assuming that the continued growth of online education within US higher ed is a foregone conclusion. We all know it's happening; the question is how to adapt to the new world.

But what if the assumption is wrong? Based on the official Department of Education / NCES new IPEDS data for Fall 2013 term, for the first time there has been no discernible growth in postsecondary students taking at least one online course in the US.

From 2002 through 2013 the most reliable measure of this metric has been the Babson Survey Research Group (BSRG) annual reporting. While there are questions about the absolute numbers, owing to differing definitions of what makes a course "online", the year-over-year growth numbers have been quite reliable and are the most-referenced numbers available. Starting last year, using Fall 2012 data, the official IPEDS data began tracking online education, and last week they put out Fall 2013 data – allowing year-over-year comparisons.

I shared the recent overall IPEDS data in this post, noting the following:

By way of comparison, it is worth noting the similarities to the Fall 2012 data. The percentage data (e.g. percent of a sector taking exclusive / some / no DE courses) has not changed by more than 1% (rounded) in any of the data. This unfortunately makes the problems with IPEDS data validity all the more important.

It will be very interesting to see the Babson Survey Research Group data that is typically released in January. While Babson relies on voluntary survey data, as opposed to mandatory federal data reporting for IPEDS, their report should have better longitudinal validity. If this IPEDS data holds up, then I would expect the biggest story for this year’s Babson report to be the first year of no significant growth in online education since the survey started 15 years ago.

I subsequently found out that BSRG is moving this year to use the IPEDS data for online enrollment. So we already have the best data available, and there is no discernible growth. Nationwide there are just 77,493 more students taking at least one online class, a 1.4% increase.

[Chart: year-over-year analysis of IPEDS online enrollment]

Why The Phrase “No Discernible Growth”?

Even though there was a nationwide increase of 77,493 students taking at least one online course, representing 1.4% growth, there is too much noise in the data for this to be considered real growth. Even with the drop in total enrollment, the percentage of students taking at least one online course only changed from 26.4% to 27.1%.

Just take one school – Suffolk County Community College – which increased by roughly 21,600 student enrollments taking at least one online course from 2012 to 2013 due to a change in how they report data, and not from actual enrollment increases. More than a quarter of the annual nationwide increase can be attributed to this one reporting change[1]. These and similar issues are why I use the phrase "no discernible growth" – the year-over-year changes are now lower than the ability of our data collection methods to accurately measure.

Combine Babson and IPEDS Growth Data

While we should not directly compare absolute numbers, it is reasonable to combine the BSRG year-over-year historical growth data (2003 – 2012) with the new IPEDS data (2012 – 2013).

[Chart: year-over-year growth in students taking at least one online course, 2003 to 2013]

One thing to notice is that this is really a long-term trend of declining growth in online enrollment. With the release of last year's BSRG report, they specifically called out this trend.

The number of additional students taking at least one online course continued to grow at a rate far in excess of overall enrollments, but the rate was the lowest in a decade.

What has not been acknowledged or fully understood is the significance of this rate hitting zero, at least within the bounds of the noise in data collection.

Implications

Think of the implications if online education has stopped growing in US higher education. Many of the assumptions underlying institutional strategic plans and ed tech vendor market data are based on continued growth in online learning. It is possible that there will be market changes leading back to year-over-year growth, but for now those assumptions might be wrong.

Rather than focusing just on this year, the more relevant questions are based on the future, particularly if you look at the longer-term trends. Have we hit a plateau in terms of the natural level of online enrollment? Will the trend continue to the point of online enrollments actually dropping below the overall enrollment? Will online enrollments bottom out and start to rise again once we get the newer generation of tools and pedagogical approaches such as personalized learning or competency-based education beyond pilot programs?

I am not one to discount the powerful effect that online education has had and will continue to have in the US, but the growth appears to be at specific schools rather than broad-based increases across sectors. Southern New Hampshire, Arizona State University, Grand Canyon University and others are growing their online enrollments, but University of Phoenix, DeVry University and others are dropping.

One issue to track is the general shift from for-profit enrollment to not-for-profit enrollment, even if the overall rate of online courses has remained relatively stable within each sector. There are approximately 80,000 fewer students taking at least one online course at for-profit institutions, while there are approximately 157,000 more students in the same category in the public and private not-for-profit sectors.

I suspect the changes will continue to happen in specific areas – number of working adults taking courses, often in competency-based programs, at specific schools and statewide systems with aggressive plans – but it also appears that just making assumptions of broad-based growth needs to be reconsidered.

Update: Please note that the data release is new and these are early results. If I find mistakes in the data or analysis that changes the analysis above, I’ll share in an updated post.

  1. Russ Poulin and I documented these issues in a separate post showing the noise is likely in the low hundreds of thousands.

The post No Discernible Growth in US Higher Ed Online Learning appeared first on e-Literate.

Another Echo Hack from Noel

Oracle AppsLab - Tue, 2015-01-06 10:44

Noel (@noelportugal) spent a lot of time during his holidays geeking out with his latest toy, Amazon Echo. Check out his initial review and his lights hack.

For a guy whose name means Christmas, seems it was a logical leap to use Alexa to control his Christmas tree lights too.

Let's take a minute to shame Noel for shooting portrait video. Good, moving on. Oddly, I found out about this from a Wired UK article about Facebook's acquisition of Wit.ai, an interesting nugget in its own right.

If you’re interested, check out Noel’s code on GitHub. Amazon is rolling out another batch of Echos to those who signed up back when the device was announced in November.

How do I know this? I just accepted my invitation and bought my very own Echo.

With all the connected home announcements coming out of CES 2015, I’m hoping to connect Alexa to some of the IoT gadgets in my home. Stretch goal for sure, given all the different ecosystems, but maybe this is finally the year that IoT pushes over the adoption hump.

Fingers crossed. The comments you must find.

Performance Problems with Dynamic Statistics in Oracle 12c

Pythian Group - Tue, 2015-01-06 09:55

I've been making some tests recently with the new Oracle 12.1.0.2 In-Memory option and have been faced with an unexpected performance problem. Here is a test case:

create table tst_1 as
with q as (select 1 from dual connect by level <= 100000)
select rownum id, 12345 val, mod(rownum,1000) ref_id  from q,q
where rownum <= 200000000;

Table created.

create table tst_2 as select rownum ref_id, lpad(rownum,10, 'a') name, rownum || 'a' name2
from dual connect by level <= 1000;

Table created.

begin
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_1',
method_opt       => 'for all columns size 1',
degree => 8
);
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_2',
method_opt       => 'for all columns size 1'
);
end;
/
PL/SQL procedure successfully completed.

alter table tst_1 inmemory;

Table altered.

select count(*) from tst_1;

COUNT(*)
----------
200000000

Waiting for in-memory segment population:

select segment_name, bytes, inmemory_size from v$im_segments;

SEGMENT_NAME         BYTES INMEMORY_SIZE
--------------- ---------- -------------
TST_1           4629463040    3533963264

Now let’s make a simple two table join:

select name, sum(val) from tst_1 a, tst_2 b where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:00:00.17

The query runs pretty fast. The execution plan has the brand new vector transformation:

Execution Plan
----------------------------------------------------------
Plan hash value: 213128033

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                          |     1 |    54 |  7756  (21)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION        |                          |       |       |            |          |
|   2 |   LOAD AS SELECT                  | SYS_TEMP_0FD9D66FA_57B2B |       |       |            |          |
|   3 |    VECTOR GROUP BY                |                          |     1 |    24 |     5  (20)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED    | :KV0000                  |     1 |    24 |     5  (20)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL            | TST_2                    |     1 |    20 |     4   (0)| 00:00:01 |
|   6 |   HASH GROUP BY                   |                          |     1 |    54 |  7751  (21)| 00:00:01 |
|*  7 |    HASH JOIN                      |                          |     1 |    54 |  7750  (21)| 00:00:01 |
|   8 |     VIEW                          | VW_VT_377C5901           |     1 |    30 |  7748  (21)| 00:00:01 |
|   9 |      VECTOR GROUP BY              |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  10 |       HASH GROUP BY               |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  11 |        KEY VECTOR USE             | :KV0000                  |   200K|  2539K|  7748  (21)| 00:00:01 |
|* 12 |         TABLE ACCESS INMEMORY FULL| TST_1                    |   200M|  1716M|  7697  (21)| 00:00:01 |
|  13 |     TABLE ACCESS FULL             | SYS_TEMP_0FD9D66FA_57B2B |     1 |    24 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("NAME2"='50a')
   7 - access("ITEM_5"=INTERNAL_FUNCTION("C0") AND "ITEM_6"="C2")
  12 - inmemory(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))
       filter(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))

Note
-----
   - vector transformation used for this statement

After seeing such impressive performance, I decided to run the query in parallel:

select /*+ parallel(8) */ name, sum(val) from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:01:02.55

The query elapsed time suddenly jumped from 0.17 seconds to almost 1 minute and 3 seconds. The second execution, though, runs in 0.6 seconds.
The new plan is:

Execution Plan
----------------------------------------------------------
Plan hash value: 3623951262

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     1 |    29 |  1143  (26)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)               | :TQ10001 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                    |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                   | :TQ10000 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY                 |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |   200K|  5664K|  1142  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         JOIN FILTER CREATE          | :BF0000  |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  9 |          TABLE ACCESS FULL          | TST_2    |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         JOIN FILTER USE             | :BF0000  |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |          PX BLOCK ITERATOR          |          |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|* 12 |           TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."REF_ID"="B"."REF_ID")
   9 - filter("NAME2"='50a')
  12 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 8 because of hint

We can see a Bloom filter instead of a key vector, but this is not the issue. The problem comes from the "dynamic statistics used: dynamic sampling (level=AUTO)" note.
In the 10046 trace file I found nine dynamic sampling queries, one of which was this:

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
  optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
  */ SUM(C1)
FROM
 (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT /*+
  NO_VECTOR_TRANSFORM ORDERED */ "A"."VAL" "ITEM_1","A"."REF_ID" "ITEM_2"
  FROM "TST_1" "A") "VW_VTN_377C5901#0", (SELECT /*+ NO_VECTOR_TRANSFORM
  ORDERED */ "B"."NAME" "ITEM_3","B"."REF_ID" "ITEM_4" FROM "TST_2" "B" WHERE
  "B"."NAME2"='50a') "VW_VTN_EE607F02#1" WHERE ("VW_VTN_377C5901#0"."ITEM_2"=
  "VW_VTN_EE607F02#1"."ITEM_4")) innerQuery

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     43.92      76.33          0          5          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     43.92      76.33          0          5          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  RESULT CACHE  56bn7fg7qvrrw1w8cmanyn3mxr (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT AGGREGATE (cr=0 pr=0 pw=0 time=8 us)
         0          0          0    HASH JOIN  (cr=0 pr=0 pw=0 time=4 us cost=159242 size=2600000 card=200000)
 200000000  200000000  200000000     TABLE ACCESS INMEMORY FULL TST_1 (cr=3 pr=0 pw=0 time=53944537 us cost=7132 size=800000000 card=200000000)
         0          0          0     TABLE ACCESS FULL TST_2 (cr=0 pr=0 pw=0 time=3 us cost=4 size=9 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  asynch descriptor resize                        1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  CSS initialization                              1        0.00          0.00
  CSS operation: action                           1        0.00          0.00
  direct path write temp                       6267        0.02         30.37
********************************************************************************

Vector transformation is disabled, an inefficient table order is fixed by the ORDERED hint, and we are left waiting for a hash table to be built on the huge TST_1 table.
The dynamic statistics feature has been greatly improved in Oracle 12c, with support for joins and group by predicates; this is why we see such a join during parse time. The new functionality is described in the "Dynamic Statistics (previously known as dynamic sampling)" section of the document Understanding Optimizer Statistics with Oracle Database 12c.

Let’s make a simpler test:

select /*+ parallel(2) */ ref_id, sum(val) from tst_1 a group by ref_id;

Execution Plan
----------------------------------------------------------
Plan hash value: 2527371111

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                   |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                    |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                 | :TQ10000 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY               |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR          |          |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of hint

We can see a "dynamic statistics used" note again. It's a simple query, without predicates, against a single table with pretty accurate statistics. From my point of view, there is no reason for dynamic sampling here at all.
Automatic dynamic sampling was introduced in 11g Release 2. A description of this feature can be found in the document Dynamic sampling and its impact on the Optimizer:
"From Oracle Database 11g Release 2 onwards the optimizer will automatically decide if dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on size of the tables in the statement and the complexity of the predicates".
It looks like the algorithm has changed in 12c, and dynamic sampling is now triggered for a broader set of use cases.
This behavior can be disabled at the statement, session or system level using the fix control for bug 7452863. For example:

ALTER SESSION SET "_fix_control"='7452863:0';
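To disable it for a single statement, the same fix control can presumably be applied through the OPT_PARAM hint (worth verifying on your own version):

select /*+ parallel(8) opt_param('_fix_control' '7452863:0') */ name, sum(val)
from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;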

Summary

Dynamic statistics have been enhanced in Oracle 12c, but this can lead to longer parse times.
Automatic dynamic statistics are used more often in 12c, which can increase parse time in more cases than before.

Categories: DBA Blogs

Troubleshooting a Multipath Issue

Pythian Group - Tue, 2015-01-06 09:37

Multipathing allows you to configure multiple paths from servers to storage arrays. It provides I/O failover and load balancing. Linux uses the device mapper kernel framework to support multipathing.

In this post I will explain the steps taken to troubleshoot a multipath issue. This should provide a glimpse into the tools and technology involved. The problem was reported on a RHEL6 system where backup software complained that the device from which /boot is mounted does not exist.

Following is the device. You can see that the device name is a wwid.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1
198337 61002 127095 33% /boot

File /dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1 is missing under /dev/mapper.

# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 9 2013 control
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpatha -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathap1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathb -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathc -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp2 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvroot -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvswap -> ../dm-7

From /etc/fstab, we find that the UUID of the device is specified:

UUID=6dfd9f97-7038-4469-8841-07a991d64026 /boot ext4 defaults 1 2

From blkid, we can see the device associated with the UUID. The blkid command prints the attributes of all block devices in the system.

# blkid
/dev/mapper/mpathcp1: UUID="6dfd9f97-7038-4469-8841-07a991d64026" TYPE="ext4"

Remounting the /boot mount point shows the user friendly name /dev/mapper/mpathcp1.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/mpathcp1 198337 61002 127095 33% /boot

From this, we can understand that the system boots with the wwid as the device name, but that later the device name is converted to the user friendly name. In the multipath configuration, user_friendly_names is enabled.

# grep user_friendly_names /etc/multipath.conf
user_friendly_names yes

As per Red Hat documentation,

“When the user_friendly_names option in the multipath configuration file is set to yes, the name of a multipath device is of the form mpathn. For the Red Hat Enterprise Linux 6 release, n is an alphabetic character, so that the name of a multipath device might be mpatha or mpathb. In previous releases, n was an integer.”

As the system mounts the right disk after booting up, the problem should be with the user friendly name configuration in the initramfs. Extracting the initramfs file (see the sketch below) and checking the multipath configuration shows that the user_friendly_names parameter is enabled.
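A typical way to extract an initramfs for inspection is sketched below; the image path is assumed from the kernel version used later in this post, and a gzip-compressed cpio image is assumed:

# mkdir /tmp/initramfs && cd /tmp/initramfs
# zcat /boot/initramfs-2.6.32-131.0.15.el6.x86_64.img | cpio -idmv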

# cat initramfs/etc/multipath.conf
defaults {
user_friendly_names yes

Now the interesting point is that /etc/multipath/bindings is missing in the initramfs, but the file is present in the system. The /etc/multipath/bindings file maps each wwid to its alias.

# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpathc 3600508b1001c725ab3a5a49b0ad9848e
mpatha 36782bcb0005dd607000003b34ef072be
mpathb 36782bcb000627385000003ab4ef14636

An initramfs can be created using the dracut command.

# dracut -v -f test.img 2.6.32-131.0.15.el6.x86_64 2> /tmp/test.out

Building a test initramfs file shows that a newly created initramfs includes /etc/multipath/bindings.

# grep -ri bindings /tmp/test.out
I: Installing /etc/multipath/bindings

So this is what is happening:
When the system boots up, multipath looks for /etc/multipath/bindings inside the initramfs for aliases to use as user friendly names. It cannot find the file, so it uses the wwid. After the system boots up, /etc/multipath/bindings is present and the device names are changed to user friendly names.

It looks like the /etc/multipath/bindings file was created after kernel installation and initrd generation. This might have happened because the multipath configuration was done after kernel installation. Even if the system root device is not on multipath, it is possible for multipath to be included in the initrd; for example, this can happen if the system root device is on LVM. This should be the reason why multipath.conf was included in the initramfs while /etc/multipath/bindings was not.

To solve the issue we can rebuild the initrd and restart the system. Re-installing the existing kernel or installing a new kernel would also fix the issue, as the initrd is rebuilt in both cases.

# dracut -v -f 2.6.32-131.0.15.el6.x86_64
Categories: DBA Blogs

Access Oracle GoldenGate JAgent XML from browser

DBASolved - Tue, 2015-01-06 09:26

There are many different ways of monitoring Oracle GoldenGate; I have posted about many of these in earlier blog posts. Additionally, I have talked about the different ways of monitoring Oracle GoldenGate at a few conferences as well (the slides can be found on my slideshare site if you're interested). In both my blog and presentations I highlight many different approaches; yet I forgot one that I think is really cool! This one was shown to me by an Oracle Product Manager before Oracle Open World 2014 back in October (yes, I'm just now getting around to writing about it).

This approach uses the Oracle GoldenGate Manager (port) to view a user-friendly version of the XML that is passed by the Oracle Monitor Agent (JAgent) to monitoring tools like Oracle Enterprise Manager or Oracle GoldenGate Director. This approach will not work with older versions of the JAgent.

Note: The Oracle Monitor Agent (JAgent) used in this approach is version 12.1.3.0.  It can be found here.  

Note: There is a license requirement to use this approach, since it is part of the Management Pack for Oracle GoldenGate. Contact your local sales rep for more info.

After the Oracle Monitor Agent (JAgent) is configured for your environment, the XML can be accessed via any web browser. Within my test environment, I have servers named OEL and FRED. The URLs needed to view this cool feature are:

OEL:
http://oel.acme.com:15000/groups

FRED:
http://fred.acme.com:15000/groups

As you can see, by using the port number (15000) of the Manager process, I can directly tap into the information being fed to the management tools for monitoring. The "groups" directory places you at the top level of the monitoring stack. Clicking on a process group will take you down into that group and show additional items being monitored by the JAgent.

In this example, you are looking at the next level down for the process EXT on OEL.  At this point, you can see what is available: monitoring points, messages, status changes and associated files for the extract process.

OEL:
http://oel.acme.com:15000/groups/EXT


Digging further into the stack, you can see what files are associated with the process.  (This is an easy way to identify parameter files without having to go directly to the command line).

OEL:
http://oel.acme.com:15000/groups/EXT/files

OEL:
http://oel.acme.com:15000/groups/EXT/files/dirprm



As you can see, the new Oracle Monitor Agent (JAgent) provides another way of viewing your Oracle GoldenGate environment without needing direct access to the server. Although this is a cool way of looking at an Oracle GoldenGate environment, it does not replace traditional monitoring approaches.

Cool Tip: The OS tool "curl" can be used to dump similar XML output to a file (shown to me by the product team).

$ curl --silent http://oel.acme.com:15000/registry | xmllint --format -

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/style/registry.xsl"?>
<registry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://oel.acme.com:15000/schema/registry.xsd">
<process name="PMP" type="4" status="3"/>
<process name="EXT" type="2" mode="1" status="3"/>
<process name="MGR" type="1" status="3"/>
</registry>
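Presumably the same pattern works for the deeper paths shown earlier, for example to dump the files associated with the EXT group:

$ curl --silent http://oel.acme.com:15000/groups/EXT/files | xmllint --format -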

In my opinion, many of the complaints about the original version of the JAgent have been addressed with the latest release of the Oracle Monitor Agent (JAgent). Give it a try!
 
Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Securing Big Data - Part 1

Steve Jones - Tue, 2015-01-06 09:00
As Big Data and its technologies such as Hadoop head deeper into the enterprise, questions around compliance and security rear their heads. The first interesting point in this is that it shows the approach to security that many of the Silicon Valley companies that use Hadoop at scale have taken, namely pretty little really. It isn't that protecting information has been seen as a massively
Categories: Fusion Middleware

It's Spring for Oracle ACM API's

Darwin IT - Tue, 2015-01-06 08:24
Before the holiday season I was working on a service to receive e-mails in BPM using the UMS email adapter, then process the attachments and the body and upload them to the Oracle ACM case the email was meant for.

I won't go into too much detail here, since there are some articles on the use of the ACM APIs, like the ones by Niall Commiskey.

Unfortunately, until now, there are no WSDL/SOAP or REST services available on the ACM APIs, as there are on the Workflow Task APIs.

However, it is not so hard to make the APIs available as services. The trick is to wrap them up in a set of Java beans: one class with methods that do the work, plus 'request and response beans' for the input parameters of the methods and for the response.

A few years ago I wrote an article on using Spring components in SOA Suite 11g. This approach is still perfectly usable for SOA/BPM 12c, and it gives you a WSDL interface on the APIs in near to no time.
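A minimal sketch of that bean structure follows; all class, method and field names are illustrative (each public class in its own file), and the body of the facade method is only indicated by comments:

public class UploadDocumentRequest {
    private String caseId;
    private String fileName;
    // getters and setters omitted for brevity
}

public class UploadDocumentResponse {
    private String documentId;
    // getters and setters omitted for brevity
}

// This is the class you register as a Spring component in the composite;
// JDeveloper can then generate a WSDL/service interface for it.
public class CaseApiFacade {
    public UploadDocumentResponse uploadDocument(UploadDocumentRequest request) {
        UploadDocumentResponse response = new UploadDocumentResponse();
        // obtain the BPMServiceClientFactory (see below), authenticate,
        // call the ACM Stream Service to attach the document to the case,
        // and populate the response bean with the result
        return response;
    }
}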

There is one remark on the APIs, though. It concerns the creation of the ACM Stream Service, or actually the creation of the BPMServiceClientFactory to get the context.

In Niall's blog you'll read that you need to set the following context properties:

        Map properties = new HashMap();
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.CLIENT_TYPE,
                       BPMServiceClientFactory.REMOTE_CLIENT);
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_PROVIDER_URL,
                       "t3://localhost:7001");
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_SECURITY_CREDENTIALS,
                       cPwd);
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_SECURITY_PRINCIPAL,
                       cUser);
        caseMgtAPI.mServiceClientFactory =
            BPMServiceClientFactory.getInstance(properties, "default", null);
Since in my case the service runs on the same server as the BPEL/BPM/ACM process engine, there is no need to create a connection (and thus provide a URL) or to authenticate as EJB_SECURITY_PRINCIPAL. So I found that the following suffices:
        Map properties = new HashMap();
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.CLIENT_TYPE,
                       WorkflowServiceClientFactory.REMOTE_CLIENT);
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_INITIAL_CONTEXT_FACTORY,
                       "weblogic.jndi.WLInitialContextFactory");
        BPMServiceClientFactory factory = BPMServiceClientFactory.getInstance(properties, null, null);
I would expect that 'WorkflowServiceClientFactory.REMOTE_CLIENT' should be 'WorkflowServiceClientFactory.LOCAL_CLIENT', but I need to verify that. The code above works in my case.
Update 12-1-2015: When using LOCAL_CLIENT I get the exception:
oracle.bpm.client.common.BPMServiceClientException: Cannot lookup Local EJB from a client. Try annotating it in the referred EJB. Caused by: oracle.bpm.client.common.BPMServiceClientException: Cannot lookup Local EJB from a client. Try annotating it in the referred EJB.
So apparently you need to use REMOTE_CLIENT.

You do need to authenticate with a BPM user that is allowed to query the case, upload documents, etc., as follows:

        context = bpmFactory.getBPMUserAuthenticationService().authenticate(userName, userPassword.toCharArray(), null);
Hope this helps a little further in creating services on ACM.

Who is a DBA Leader?

Pakistan's First Oracle Blog - Tue, 2015-01-06 06:00
Sitting behind a big mahogany table, smoking a Cuban cigar, glaring at the person sitting across, one hand holding the receiver of a black phone to the right ear and the other holding a mobile phone to the left ear: that may be the image of a DBA boss in some white-elephant government outfit, but it certainly cannot work in an organization made up of professionals like database administrators. And if such an image, or something similar, is working at any company, then that company is not great. It's as simple as that.






So who is a DBA leader? The obvious answer is the person who leads a team of database administrators. Sounds simple enough, but it takes a lot to be a true leader. There are many DBA bosses at various layers, and DBA managers at various layers, but being a DBA leader is very different. If you are a DBA leader, then you should be kinda worshiped. If you work in a team which has a DBA leader, then you are a very lucky person.

A DBA leader is one who leads by example. He walks the talk. He is a doer and not just a talker. He inspires, motivates, and energizes the team members to follow him and then exceed his example. For instance, when a client asks to fix a performance issue in a RAC cluster, the DBA leader first jumps in at the problem and starts collaborating with the team. He analyzes the problem and presents potential solutions, or at least a line of action. He engages the team and listens to them. He won't just assign the problem to somebody, disappear, and come back at 5pm asking about status. A DBA leader is not superhuman, so he will get problems he has no knowledge of. He will research the problem with the team and will learn and grow with them. That approach electrifies the team.

A DBA leader is a grateful person. He cannot thank his team enough for doing a great job. When, under his able leadership, the team reaches a solution, then regardless of his own contribution, a DBA leader makes his team look awesome. That generates immense prestige for the DBA leader while making the team look great. The team will cherish the fact that the solution was reached thanks to the deep insights of the DBA leader, and yet the leader gave the credit to them.

A DBA leader is one who is always there. He falls before the team falls, and doesn't become aloof when things don't go well. Things will go wrong and crises will come. In such situations, responsibility is shared, and a DBA leader doesn't shirk from it. In the team of a DBA leader, there are no scapegoats.

A leader of DBAs keeps both the big picture and the details in perspective at the same time. He provides the vision and lives the vision from the front. He learns and then he leads. He does all of this and does it superbly, and that is why he is the star and such a rare commodity, and that is why he is the DBA LEADER.

Categories: DBA Blogs

Oracle Audit Vault Reports

The Oracle Audit Vault by default installs over one hundred (100) reports. This includes core audit reports as well as compliance reports. Reporting is a key feature of the Oracle Audit Vault, and one which is well built, as evidenced by the use of BI Publisher to allow for easy modification and creation of new reports.

Audit Reports

The audit reporting bundle installed by default has the following categories –

  • Activity Reports
  • Entitlement
  • Stored Procedure Audit 
  • Alerts

The following table lists the audit reports installed by default –

Type | Report | Description
-----|--------|------------
Activity | Activity Overview | Digest of all captured audit events for a specified period of time
Activity | Data Access | Details of audited read access to data for a specified period of time
Activity | Data Modification | Details of audited data modifications for a specified period of time
Activity | Data Modification Before-After Values | Details of audited data modifications for a specified period of time, showing before and after values
Activity | Database Schema Changes | Details of audited DDL activity for a specified period of time
Activity | All Activity | Details of all captured audit events for a specified period of time
Activity | Failed Logins | Details of audited failed user logins for a specified period of time
Activity | User Login and Logout | Details of audited successful user logins and logouts for a specified period of time
Activity | Entitlements Changes | Details of audited entitlement related activity for a specified period of time
Activity | Audit Settings Changes | Details of observed user activity targeting audit settings for a specified period of time
Activity | Secured Target Startup and Shutdown | Details of observed startup and shutdown events for a specified period of time
Entitlement | User Accounts | Details of all existing user accounts
Entitlement | User Accounts by Secured Target | User accounts by Secured Target report
Entitlement | User Privileges | Details of all existing user privileges
Entitlement | User Privileges by Secured Target | User privileges by Secured Target report
Entitlement | User Profiles | Digest of all existing user profiles
Entitlement | User Profiles by Secured Target | User profiles by Secured Target report
Entitlement | Database Roles | Digest of all existing database roles and application roles
Entitlement | Database Roles by Secured Target | Database roles by Secured Target report
Entitlement | System Privileges | Details of all existing system privileges and their allocation to users
Entitlement | System Privileges by Secured Target | System privileges by Secured Target report
Entitlement | Object Privileges | Details of all existing object privileges and their allocation to users
Entitlement | Object Privileges by Secured Target | Object privileges by Secured Target report
Entitlement | Privileged Users | Details of all existing privileged users
Entitlement | Privileged Users by Secured Target | Privileged users by Secured Target report
Stored Procedure Audit | Stored Procedure Activity Overview | Digest of all audited operations on stored procedures for a specified period of time
Stored Procedure Audit | Stored Procedure Modification History | Details of audited stored procedure modifications for a specified period of time
Stored Procedure Audit | Created Stored Procedures | Stored procedures created within a specified period of time
Stored Procedure Audit | Deleted Stored Procedures | Stored procedures deleted within a specified period of time
Stored Procedure Audit | New Stored Procedures | Latest state of stored procedures created within a specified period of time
Alerts | All Alerts | All alerts issued within a specified period of time
Alerts | Critical Alerts | All critical alerts issued within a specified period of time
Alerts | Warning Alerts | All warning alerts issued within a specified period of time

If you have questions, please contact us at info@integrigy.com.

Reference Tags: Auditing, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

MySQL Locking

Kubilay Çilkara - Tue, 2015-01-06 03:53
MySQL and Row Level Locking? Or why are you getting the error:

ERROR 1205 (HY000) : Lock wait timeout exceeded; try restarting transaction

You get the error because your transaction has waited for a row lock, held by another transaction, for longer than the set limit. The default wait limit, set by the innodb_lock_wait_timeout parameter, is 50 seconds. If the blocking transaction doesn't commit or roll back within 50 seconds, the waiting statement gives up with this error. We don't want to hold locks for longer than 50 seconds anyway; throughput would be affected.
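To check the current limit, and to raise it for one session if you really need longer transactions:

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
SET SESSION innodb_lock_wait_timeout = 120;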

And yes, MySQL uses row-level locking in its InnoDB tables, and has done so since at least MySQL 5.1 (around the time Oracle took over). That means only the rows which are selected ... FOR UPDATE ... are locked, and not the whole table. To see the threads (sessions) which are blocking other threads, and which queries are doing the locking, use the following INFORMATION_SCHEMA dictionary SQL query as DBA. You will be able to see blockers and waiters of transactions waiting on locks. Run it as is against INFORMATION_SCHEMA; no modifications needed.

Use this SQL query to monitor locks and transactions, and note that the query will return data only when there are locks!


SELECT 
    r.trx_id AS waiting_trx_id,
    r.trx_mysql_thread_id AS waiting_thread,
    TIMESTAMPDIFF(SECOND,
        r.trx_wait_started,
        CURRENT_TIMESTAMP) AS wait_time,
    r.trx_query AS waiting_query,
    l.lock_table AS waiting_table_lock,
    b.trx_id AS blocking_trx_id,
    b.trx_mysql_thread_id AS blocking_thread,
    SUBSTRING(p.host,
        1,
        INSTR(p.host, ':') - 1) AS blocking_host,
    SUBSTRING(p.host,
        INSTR(p.host, ':') + 1) AS blocking_port,
    IF(p.command = 'Sleep', p.time, 0) AS idle_in_trx,
    b.trx_query AS blocking_query
FROM
    information_schema.innodb_lock_waits AS w
        INNER JOIN
    information_schema.innodb_trx AS b ON b.trx_id = w.blocking_trx_id
        INNER JOIN
    information_schema.innodb_trx AS r ON r.trx_id = w.requesting_trx_id
        INNER JOIN
    information_schema.innodb_locks AS l ON w.requested_lock_id = l.lock_id
        LEFT JOIN
    information_schema.PROCESSLIST AS p ON p.id = b.trx_mysql_thread_id
ORDER BY wait_time DESC;
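To see the query return something, you need an actual lock wait. A minimal two-session sketch (the table name is illustrative):

-- Session 1: take a row lock and hold it
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

-- Session 2: blocks until session 1 commits or rolls back,
-- or fails with ERROR 1205 after innodb_lock_wait_timeout seconds
UPDATE accounts SET balance = balance - 10 WHERE id = 1;

While session 2 is waiting, the monitoring query above, run from a third session, shows the blocker and the waiter.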

Categories: DBA Blogs

Don't forget to configure Power Management settings on Hyper-V

Yann Neuhaus - Tue, 2015-01-06 02:51

Recently I had the opportunity to audit a SQL Server database hosted on a Hyper-V 2012 cluster. I noticed that the guest operating system had its power plan configured to High performance. This is a great thing, but when I talked to the system administrator to verify whether the same option was turned on in the Hyper-V host operating system, this was unfortunately not the case.

As a reminder, the power policy setting inside the guest operating system has no effect in virtual environments, so we always have to verify that this option is configured correctly at the right level.

I performed a quick demonstration for my customer using the SuperPI benchmark tool, which is pretty simple: it calculates pi to a specific number of digits using one thread, and for my purpose that is sufficient.

--> Let's start with the situation where Power Saver is enabled on the Hyper-V side and High performance is turned on on the guest side, and run the SuperPI tool with 512K digits to compute:

 

[Screenshot: SuperPI 512K calculation with Power Saver on the host]

 

Here is the time taken by the guest to calculate pi:

 

[Screenshot: SuperPI result with Power Saver on the host]

 

 

Now let's change the story by reversing the power settings: High performance on the Hyper-V side and Power Saver on the guest side. Then we can run the same benchmark test:

 

[Screenshot: SuperPI result with High performance on the host]

 

5.688 seconds for this test against 13.375 seconds for the first test, a 57% improvement... not so bad :-) But let's look at a more common situation. In most configurations the power management setting is left at Balanced by default, and my customer asked me whether there is a noticeable difference if we leave that default. In order to justify my recommendation we performed the same test, but this time I changed the number of digits to compute, to simulate a more realistic OLTP transaction (short, and requiring all CPU resources for a short time). The table below compares both results:

 

Settings | Duration (s)
Hyper-V: Balanced | 0.219
Hyper-V: High performance | 0.141

 

We can see that the High performance run takes only 64% of the CPU time of the Balanced run (0.141s versus 0.219s, a 36% reduction) in the context of my customer! So after that, my customer was convinced to change this setting, and I hope it is the same for you. Of course, with long-running queries that consume a lot of CPU resources over a long time, the difference may be less discernible, because the processor wake-up time is very small compared to the total worker time they consume.

Keep in mind that changing the power management state from within the guest has no effect in a virtualized environment. You must take care of this setting directly on the hypervisor.
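For reference, a quick way to check and set the active plan on the host from an elevated prompt; the GUID below is the well-known High performance scheme, but verify it with powercfg /list on your system:

C:\> powercfg /getactivescheme
C:\> powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c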

Happy virtualization !!

Is Oracle going mobile?

Chris Foot - Tue, 2015-01-06 01:40

Factoring a mobile workforce into a business's enterprise application infrastructure is a consideration many CIOs are making nowadays.

Bring-your-own-device has a number of implications regarding database security, accessibility, operating system compatibility and a wealth of other factors. Constructing and maintaining an ecosystem designed to accommodate personnel using mobile devices to access enterprise software through public networks is more than a best practice – it's a necessity.

Oracle makes enterprise mobility a little easier
Enterprises using Oracle's E-Business Suite applications would do well to consider the developer's Mobile Application Framework (MAF), which allows developers to create single-source mobile apps capable of being deployed across multiple OSes. Nation Multimedia reported that MAF provides programmers with a set of tools for building software that can satisfy the demands of the mobile workforce.

Oracle Asia Pacific Vice President for Asean Fusion Middleware Sales Chin Ying Loong spoke with the source, asserting that enterprises need platforms that allow them to provide apps through whatever devices their employees choose to use, whether they be Apple tablets or Android phones.

"The trick for organizations today is to implement their own end-to-end mobile platforms, and to keep things simple," said Loong, as quoted by Nation Multimedia. "Simplicity is crucial to the rapid and effective integration of business data with user-friendly mobile applications. The cloud in particular offers businesses an excellent back-end platform to support their mobility solutions in a simple and cost-effective manner."

Has the mobile workforce really arrived?
BYOD isn't a trend of the future, but an occurrence of the present. MarketsandMarkets found that the enterprise mobility market will increase to $266.17 billion in 2019 at a compound annual growth rate of 25.5 percent from 2014 to 2019. IDC predicted that by next year, the number of mobile employees will reach 1.3 billion – approximately 37 percent of the global workforce.

Smart Dog Services' Alison Weiss commented on these statistics, acknowledging that the average IT department has a budget of $157 per device per worker, an expenditure that is anticipated to reach $242 per device per employee by 2016.

Given these developments, it's important for enterprises to consider which kind of applications personnel will attempt to access via mobile devices. For instance, cloud storage services for saving documents, enterprise resource planning software and customer relationship management solutions are all technologies mobile workers would strive to use while on the go.

The post Is Oracle going mobile? appeared first on Remote DBA Experts.

Managed Backup with SQL Server 2014

Yann Neuhaus - Mon, 2015-01-05 21:37

In a previous blog post called Backup a SQL Server database from On-Premise to Azure, I presented the different tools to back up your on-premise databases to Azure Storage. SQL Server Managed Backup to Windows Azure was one of these tools.

In my opinion, Managed Backup is a great tool. That is why I decided to dedicate an entire blog to this feature.

 

Understanding Managed Backup

Managed Backup is a new feature introduced in SQL Server 2014 that works with Windows Azure. It allows you to manage and automate SQL Server backups (from your on-premise or Azure SQL Server instance), and it is configurable by … script only (T-SQL or PowerShell)!

Microsoft recommends using Managed Backup for Windows Azure virtual machines.

Managed Backup only works with user databases in the Full or Bulk-logged recovery model, and can only perform full and log backups.

SQL backups support point-in-time restore and are stored according to a retention period. This setting indicates the desired lifespan of a backup stored in Azure Storage; once the period is reached, the backup is deleted.

SQL backups are scheduled based on the transaction workload of the database.

A full database backup is scheduled when:

  • The Managed backup feature is enabled for the first time
  • The log growth is 1 GB or larger
  • The last full database backup is older than 1 week
  • The log chain is broken

A transaction log backup is scheduled when:

  • No log backup history is available
  • The log space is 5 MB or larger
  • The last log backup is older than 2 hours
  • A full database backup has been performed

 

Configuring Managed Backup

First, you need to activate the SQL Server Agent service in order to use the feature.

In this example, I have 3 user databases as follows:

 

Database Name | Recovery Model | Data Files Location
AdventureWorks2012 | Simple | On-premise
AdventureWorks2014 | Full | On-premise
hybriddb | Bulk-logged | Azure Storage

 

Managed Backup can be enabled at the instance level or database level.

If you decide to activate the feature at the instance level, the configuration will be set for all user databases of your instance (even for databases added after the configuration).

On the other hand, you can activate the feature for specific user databases. If the feature is also configured at the instance level, it will be overridden by the configuration at the database level.

To configure the feature, you must provide a set of parameters:

  • The URL of the Azure Storage
  • The retention period in days
  • The credential name
  • The encryption algorithm

If the encryption algorithm is not set to ‘NO_ENCRYPTION’, you also need to provide these parameters:

  • The encryptor type
  • The encryptor name

Moreover, when you configure your Managed Backup, you need to specify whether you want to activate it.

 

You can also perform a database backup with COPY_ONLY. To do this, you need to use the 'smart_admin.sp_backup_on_demand' stored procedure, specifying the database name.

However, this stored procedure will use the configuration of the Managed Backup at the database level. That means you must configure and enable the Managed Backup for your database.
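A minimal sketch of such an on-demand call; I believe the procedure also accepts a @type parameter ('Database' or 'Log'), but verify that against your version's documentation:

USE msdb;
GO
EXEC smart_admin.sp_backup_on_demand
    @database_name = 'AdventureWorks2014',
    @type = 'Database';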

 

We need to create a credential in order to be able to connect to Azure Storage:

 

CREATE CREDENTIAL dbiservices
WITH IDENTITY = 'dbiservices',
SECRET = 'password';

 

Let’s configure our Managed Backup at the instance level:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @enable_backup = 0,
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 15,
    @credential_name = 'dbiservices',
    @encryption_algorithm = 'NO_ENCRYPTION';

If I want to display the instance configuration:

 

USE msdb;
GO
SELECT * FROM smart_admin.fn_backup_instance_config();

 

Here is the result:

 

[Screenshot: instance-level Managed Backup configuration]

 

We will override the Managed Backup configuration for ‘hybriddb’ database:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 0,
    @credential_name = 'dbiservices',
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 25,
    @encryption_algorithm = 'NO_ENCRYPTION';

 

If I want to display the database configuration of all databases of the instance:

 

USE msdb;
GO
SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm
FROM smart_admin.fn_backup_db_config(NULL); -- NULL returns the configuration of all databases

 

Here is the result:

 

[Screenshot: database-level Managed Backup configuration for the three databases]

 

Notice that the ‘AdventureWorks2012’ database has ‘is_managed_backup_enabled’ set to ‘NULL’. Indeed, this database is not eligible for Managed Backup because its Recovery Model is set to Simple.

 

Now, I activate the Managed Backup at the instance level:

 

USE msdb;
GO
-- Enable the instance-level defaults (again via sp_set_instance_backup)
EXEC smart_admin.sp_set_instance_backup
    @enable_backup = 1;
GO

 

Now, I activate the Managed Backup for ‘hybriddb’ database:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 1;

 

If I explore Azure Storage, I can find my backups:

 

[Screenshot: backup files stored in the Azure Storage container]

 

 

Conclusion

As I said in the introduction, Managed Backup is a great feature. You can configure and enable backups for your user databases easily and quickly.

However, it has some serious limitations... We can expect Managed Backup to be extended to system databases. Moreover, we can also expect it to allow backups of user databases using the Simple Recovery Model.

Furthermore, this feature only works with Azure Storage. I would like to be able to choose my storage destination; I do not understand why we cannot back up to local disks, for example.

Highlight negative numbers in an APEX Report (css only)

Dimitri Gielis - Mon, 2015-01-05 17:30
Here's a screenshot of the result we want: the negative numbers are highlighted in red.

There are many ways to highlight certain areas in a report, but depending on the complexity of the logic that defines what gets highlighted, I use one of the following three techniques:
  1. CSS only
  2. CSS and a Dynamic Action with one line of jQuery
  3. CSS and a column in the SQL query that defines the class
In this post I will explain the first technique; the other two are for future posts.
CSS only solution to highlight a negative number
Create a classic Report (the same technique works for an Interactive Report). Edit the number column(s) you want to turn red when negative and modify the Column Formatting as shown below.

I'm wrapping a span around my column (AMOUNT) and using the HTML5 data- attribute to store the number in that attribute. Doing this allows me to use a CSS selector to check whether it's a negative number or not. Unlike jQuery, CSS doesn't have a :contains selector; if it did, we wouldn't have to create the extra data-number attribute.
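
As a sketch, the Column Formatting (HTML Expression) described above could look like this; the exact expression from the original screenshot is not reproduced, and the AMOUNT column with its #AMOUNT# substitution string is taken from the example above:

<span data-number="#AMOUNT#">#AMOUNT#</span>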

In the Page Attributes, in the CSS section, we add the following inline CSS (note that you can add this to your Page Template too, so it works for all pages):
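
The screenshot of the CSS is not reproduced here; a minimal sketch of the rule described below, assuming the data-number attribute from the previous step, would be:

span[data-number*="-"] {
  color: red;
}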

What the CSS is doing: every span with a data-number attribute that contains a - (dash, which means it's a negative number) gets the color red.
That's it...

You find the online example here: https://www.apexrnd.be/ords/f?p=DGIELIS_BLOG:REPORT_HIGHLIGHT_CSS

Categories: Development

Announcement: Enterprise Manager 12c plugin updates

Jean-Philippe Pinte - Mon, 2015-01-05 16:51
The latest version of the plugins for Enterprise Manager 12c has just been announced.
  • Enterprise Manager for Cloud – version 12.1.0.9.0
  • Enterprise Manager for Oracle Cloud Framework – version 12.1.0.2.0
  • Enterprise Manager for Storage Management – version 12.1.0.5.0
  • Enterprise Manager for Oracle Database – version 12.1.0.7.0
  • Enterprise Manager for Oracle Fusion Middleware – version 12.1.0.7.0
  • Enterprise Manager for Oracle Audit Vault and Database Firewall – version 12.1.0.3.0
This latest plugin release brings many interesting new features: Cloud Maintenance, Data Refresh (for the DBaaS data lifecycle), SOA as a Service, OSB as a Service, etc.

These plugins can of course be downloaded via the "Self Update" feature of Oracle Enterprise Manager 12c.


More information:

CVSS version 3.0 Preview 2

Oracle Security Team - Mon, 2015-01-05 16:48

Hello, this is Darius Wiles.

Oracle has been using the Common Vulnerability Scoring System (CVSS) in Critical Patch Update advisories and Security Alerts for over 8 years. CVSS version 2.0 is the current standard, but the CVSS Special Interest Group (SIG), acting on behalf of FIRST, has recently published a preview of the upcoming CVSS version 3.0 standard.

The CVSS version 3.0 preview represents a near final version of the standard and includes metric and vector strings, formula, scoring examples and a calculator. These are all available at the CVSS version 3.0 development site at http://www.first.org/cvss/v3/development. The official public comment period is scheduled to last through February 28, 2015 and we encourage everyone with an interest in CVSS to review the preview and provide feedback to cvss-v3-comments@first.org.

Eric Maurice wrote a blog post a few years ago that explains how Oracle uses CVSS version 2.0, including the reasons Oracle added a Partial+ custom score for Confidentiality, Integrity and Availability metrics. A major improvement planned for version 3.0 is the addition of a Scope metric that provides a more generic way to indicate if the impact of a vulnerability extends beyond the component that contains the vulnerability. This new ‘Scope’ metric will eliminate the need for Oracle to use a Partial+ custom score.

The version 2.0 Access Complexity metric was a combination of several concepts, sometimes making it difficult to know which value to assign when some concepts were high risk and some low risk for a given vulnerability. Version 3.0 splits the privileges required by an attacker and whether the attack requires user (victim) interaction into separate, new metrics.
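
For illustration, a version 3.0 base vector with these concepts broken out might look like CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N, where Privileges Required (PR) and User Interaction (UI) appear as separate metrics alongside Attack Complexity (AC). Note that this vector is an illustrative sketch based on the preview's metric names, not an example taken from the standard.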

Version 3.0 also clarifies at which stage of an attack a CVSS score should be calculated. Because Version 2.0 did not offer this guidance, it could lead to variations in CVSS scores between organizations. Version 3.0 provides greater clarity by stating, essentially, that a CVSS score should be calculated when the first impact occurs.

This is just a high-level overview of some of the changes, and we've glossed over some important details. We encourage you to take a look at the preview and provide feedback to the SIG before the end of the comment period. We are excited about the planned improvements to version 3.0 and hope to move to the new standard in our alerts and advisories soon after the final standard is published.

For More Information:

The CVSS version 3.0 development site is located at http://www.first.org/cvss/v3/development

Oracle’s use of the CVSS 2.0 Scoring System is explained at http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html

Oracle multitenant dictionary: upgrade

Yann Neuhaus - Mon, 2015-01-05 16:10

This is the second part of the previous post about metadata links. I've shown how a sharing=metadata function becomes a sharing=none function when it is changed in the PDB - i.e. when it no longer has the same DDL and therefore no longer has the same signature.

Here is another experiment doing the opposite: change the function in the root and see what happens in the PDB. Again, this is playing with internals in order to understand the 'upgrade by unplug-plug' feature available in 12c multitenant (and single-tenant).