Feed aggregator

OGh Tech Experience 2017 : Day 1 – Rumours of my demise

Tim Hall - Thu, 2017-06-15 12:39

For those that missed me, I did actually turn up to the event today. I arrived at about 11:00, got my badge, sat down in the speaker room for about 2 minutes, then left and came back to the hotel.

I was ill most of last week and I thought I was a lot better, but the flight yesterday and the heat since I’ve been here have really put me back. My sinuses feel like they are going to explode and I have zero energy.

I seem to be jinxed where the Netherlands are concerned. Last year I got ill between the AMIS and OGh events. This time it’s come slap bang on the OGh event.

Unless something changes drastically overnight, my plan for tomorrow is to arrive late, do my two presentations, then leave and get back to bed. I fly home the following morning.

Cheers

Tim…


12c NSSn process for Data Guard SYNC transport

Yann Neuhaus - Thu, 2017-06-15 10:15

In a previous post https://blog.dbi-services.com/dataguard-wait-events-have-changed-in-12c/ I mentioned the new processes NSA for ASYNC transport and NSS for SYNC transport. I’m answering a comment about the number of processes a bit late: yes, there is one NSSn process per LOG_ARCHIVE_DEST_n destination in SYNC, and the numbers match.

Here is my configuration with two physical standbys:
DGMGRL> show configuration
 
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)

Both are in SYNC:
DGMGRL> show database orclb logxptmode;
LogXptMode = 'sync'
DGMGRL> show database orclc logxptmode;
LogXptMode = 'sync'

I can see two NSS processes:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s): "ps -edf | grep --color=auto ora_nss[0-9]"
oracle 4952 1 0 16:05 ? 00:00:00 ora_nss3_ORCLA
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

Here are the two log archive destinations:
SQL> select name,value from v$parameter where regexp_like(name,'^log_archive_dest_[23]$');
NAME VALUE
---- -----
log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)

I set the 3rd one to ASYNC:
DGMGRL> edit database orclc set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS3 process has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s): "ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

I set the 2nd destination to ASYNC:
DGMGRL> edit database orclb set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS2 process has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s): "ps -edf | grep --color=auto ora_nss[0-9]"

Now I set the 3rd destination back to SYNC first:
DGMGRL> edit database orclc set property logxptmode=SYNC;
Property "logxptmode" updated

I can see that NSS3 has started, as log_archive_dest_3 is the destination in SYNC now:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s): "ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA

Then starting the second one:
DGMGRL> edit database orclb set property logxptmode=SYNC;
Property "logxptmode" updated

Here are the two processes:

DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s): "ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA
oracle 5393 1 0 16:20 ? 00:00:00 ora_nss2_ORCLA

So if you see some SYNC Remote Write events in ASH, look at the program name to know which destination it is.
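
For example, a quick ASH query along these lines (a sketch; v$active_session_history and the statistic names are standard, adjust the time window as needed) shows which NSSn process, and therefore which destination, spent time on that event:

-- count 'SYNC Remote Write' samples per NSS process; the program name
-- (ora_nss2, ora_nss3, ...) maps to the log_archive_dest_n number
select program, count(*)
from v$active_session_history
where event = 'SYNC Remote Write'
and sample_time > sysdate - 1/24
group by program
order by program;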

 


A first look at EDB Postgres Enterprise Manager 7 beta – Connecting a PostgreSQL instance

Yann Neuhaus - Thu, 2017-06-15 10:14

In the last post we did a click/click/click setup of the PEM server. What we want to do now is to attach a PostgreSQL instance to the PEM server so that we can monitor and administer it. For that we need to install the PEM agent on a host where a PostgreSQL instance is running (192.168.22.249 in my case, which runs PostgreSQL 10 Beta1). Let’s go …

As usual, if you want the systemd services generated automatically, you should run the installation as root:

[root@pemclient postgres]# ls
pem_agent-7.0.0-beta1-1-linux-x64.run
[root@pemclient postgres]# chmod +x pem_agent-7.0.0-beta1-1-linux-x64.run 
[root@pemclient postgres]# ./pem_agent-7.0.0-beta1-1-linux-x64.run 

The installation itself is not a big deal, just follow the screens:

[Installation wizard screenshots: pem_agent1 – pem_agent9]

Once done we have a new systemd service:

[root@pemclient postgres]# systemctl list-unit-files | grep pem
pemagent.service                              enabled 

… and the processes that make up the PEM agent:

[root@pemclient postgres]# ps -ef | grep pem
root      3454     1  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg
root      3455  3454  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg --pid 3454
root      3515  2741  0 16:43 pts/0    00:00:00 grep --color=auto pem

Heading back to the PEM web interface the new agent is visible immediately:
[Screenshot: pem_agent10]

So, let’s add the instance:
[Screenshots: pem_agent11, pem_agent12, pem_agent17, pem_agent14, pem_agent15]

Of course, we need to allow connections to our PostgreSQL instance from the PEM server. Adding this to the pg_hba.conf and reloading the instance fixes the issue:

host    all             all             192.168.22.248/32       md5
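
A minimal sketch of the reload step (assuming you can run psql as a superuser on the instance; the prompt is illustrative):

postgres@pgserver:~$ psql -c "select pg_reload_conf();"
 pg_reload_conf
----------------
 t
(1 row)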

Once done:
[Screenshot: pem_agent16]

… and the instance is there.

In the next post we’ll setup some monitoring for our newly added PostgreSQL instance.

 


A first look at EDB Postgres Enterprise Manager 7 beta

Yann Neuhaus - Thu, 2017-06-15 08:09

In case you missed it: EnterpriseDB has released the beta of Postgres Enterprise Manager 7. If installation is as easy as it is for the current version, it should just be a matter of clicking next; let’s see.

Because the installer will create the systemd services, installation should be done as root:

[root@edbpem tmp]$ ls -l
total 289076
-rw-r--r--. 1 root root 296009946 Jun  1 18:58 pem_server-7.0.0-beta1-1-linux-x64.run
[root@edbpem tmp]$ chmod +x pem_server-7.0.0-beta1-1-linux-x64.run 
[root@edbpem tmp]$ ./pem_server-7.0.0-beta1-1-linux-x64.run 

From now on everything is graphical and straightforward:

[Installation wizard screenshots: pem7_1 – pem7_32]

What you get from a process perspective is this:

[root@edbpem tmp]$ ps -ef | grep pem
postgres 13462     1  0 19:17 ?        00:00:00 /u01/app/postgres/product/96/db_2/bin/postgres -D /u02/pgdata/pem
root     13869     1  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg
root     13870 13869  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg --pid 13869
postgres 13883 13462  1 19:18 ?        00:00:02 postgres: agent1 pem 127.0.0.1(53232) idle
postgres 13885 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53234) idle
postgres 13886 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53236) idle
postgres 13887 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53238) idle
postgres 13888 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53240) idle
pem      13938 13937  0 19:18 ?        00:00:00 EDBPEM                                                              -k start -f /u01/app/postgres/product/EnterpriseDB-ApacheHTTPD/apache/conf/httpd.conf
root     14301 11016  0 19:20 pts/0    00:00:00 grep --color=auto pem

Two new systemd services have been created so PEM should startup and shutdown automatically when the server reboots:

[root@edbpem tmp]# systemctl list-unit-files | egrep "pem|postgres"
pemagent.service                              enabled 
postgresql-9.6.service                        enabled 
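
To double check that both services will come up after a reboot (a quick, illustrative check):

[root@edbpem tmp]# systemctl is-enabled pemagent.service postgresql-9.6.service
enabled
enabled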

Let’s connect to PEM: https://192.168.22.248:8443/pem

[Screenshot: pem7_33]

If you have an EDB subscription, now is the time to enter the product key:

[Screenshot: pem7_34]

What you can see immediately is that the look and feel has changed to that of pgAdmin 4 (a fat client for PEM, as in the current version, is no longer available):

[Screenshots: pem7_35 – pem7_37]

In the next post we’ll add a PostgreSQL instance and start to monitor it.

 


Take Advantage of Oracle Software Security Assurance

Oracle Security Team - Thu, 2017-06-15 07:00

In a previous blog entry (What is Assurance and Why Does It Matter?), Mary Ann Davidson explains the importance of Security Assurance and introduces Oracle Software Security Assurance, Oracle’s methodology for building security into the design, build, testing, and maintenance of its products.

The primary objective of software security assurance is to help ensure that security controls provided by software are effective, work in a predictable fashion, and are appropriate for that software. The purpose of ongoing security assurance is to make sure that this objective continues to be met over time (throughout the useful life of software).

The development of enterprise software is a complex matter. Even in mature development organizations, bugs still occur, and the use of automated tools does not completely prevent software defects. One important aspect of ongoing security assurance is therefore to remediate security bugs in released code. Another aspect of ongoing security assurance is to ensure that the security controls provided by software continue to be appropriate when the use cases for the software change. For example, years ago backups were performed mostly on tapes or other devices physically connected to the server being backed up, while today many backups are performed over private or public networks and sometimes stored in a cloud. Finally, ongoing security assurance must also address changing threats (e.g., new attack methods) and obsolete technologies (e.g., deprecated encryption algorithms).

Oracle customers need to take advantage of Oracle’s ongoing security assurance efforts in order to preserve, over time, the security posture associated with their use of Oracle products. To that end, Oracle recommends that customers remain on actively-supported versions and apply security fixes as quickly as possible after they have been published by Oracle.

Introduced in 2005, the Critical Patch Update program is the primary mechanism for the backport of security fixes for all Oracle on-premises products. The Critical Patch Update is Oracle’s program for the distribution of security fixes in previously-released versions of Oracle software. Critical Patch Updates are regularly scheduled: they are issued quarterly on the Tuesday closest to the 17th of the month in January, April, July, and October. This fixed schedule is intended to provide enough predictability to enable customers to apply security fixes in normal maintenance windows. Furthermore, the dates of the Critical Patch Update releases are intended to fall outside of traditional "blackout" periods when no changes to production systems are typically allowed (e.g., end of fiscal years or quarters or significant holidays).

Note that in addition to this regularly-scheduled program for security releases, Oracle retains the ability to issue out of schedule patches or workaround instructions in case of particularly critical vulnerabilities and/or when active exploits are reported "in the wild." This program is known as the Security Alert Program.

Critical Patch Update and Security Alert fixes are only provided for product versions that are "covered under the Premier Support or Extended Support phases of the Lifetime Support Policy." This means that Oracle does not backport fixes to product versions that are out of support. Furthermore, unsupported product releases are not tested for the presence of vulnerabilities. It is, however, common for vulnerabilities to be found in legacy code, and vulnerabilities fixed in a given Critical Patch Update release can also affect older product versions that are no longer supported.

As a result, organizations choosing to continue to use unsupported systems face increasing risks over time. Malicious attackers are known to reverse-engineer the content of published security fixes, and it is common for exploit code to be published in hacking frameworks soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Continuing to use unsupported systems can therefore have two serious implications:
(a) Unsupported releases are likely to be affected by vulnerabilities that are not known to the affected software user, because these releases are no longer subject to ongoing security assurance activities, and
(b) Unsupported releases are likely to be vulnerable to flaws that are known to malicious perpetrators, because these bugs have been fixed (and publicly disclosed) in subsequent releases.

Unfortunately, security studies continue to report that, in addition to human errors and systems misconfigurations, the lack of timely security patching constitutes one of the greatest reasons for the compromise of IT systems by malicious attackers. See, for example, the Federal Trade Commission’s paper "Start with Security: A Guide for Business", which recommends that organizations have effective means to keep up with security releases of their software (whether commercial or open source). Delays in security patching and overall lapses in good security hygiene have plagued IT organizations for years. In many instances, organizations will cite the "fear of breaking something in a business-critical system" as the reason for not keeping up with security patches. Here lies a fundamental paradox: a given system may be considered too important to fail (or to be temporarily brought offline), and this is the very reason why it is not kept up to date with security patches! These organizations are betting that the certain cost of a system availability interruption outweighs the potential impact of a security incident that could result from not keeping up with a security release. This amounts to driving a car with very little gas left in the tank and thinking "I don’t have time to stop at the gas station, because I really need my car and I am too busy to gas up." Obviously, the scarcity of technical personnel and the costs associated with testing complex applications and deploying patches further exacerbate the problem. The larger, more complex, and more operation-critical the IT environment, the greater the "to patch or not to patch" conundrum.

In recent years, Oracle has issued stronger caution against postponing the application of security fixes or knowingly continuing to use unsupported versions. For example, the April 2017 Critical Patch Update Advisory includes the following warning: "Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay." Keeping up with security releases is simply a critical requirement for preserving the security posture of an IT environment, regardless of the technologies (or vendors) in use.

Unify - An Insight Into the Product

Rittman Mead Consulting - Thu, 2017-06-15 06:00

Monday, 12 Jun saw the official release of Unify, Rittman Mead's very own connector between Tableau and OBIEE. It provides a simple but powerful integration between the two applications that allows you to execute queries through OBIEE and manipulate and render the datasets using Tableau.


Why We Made It

One of the first questions of course would be why we would want to do this in the first place. The excellent thing about OBI is that it acts as an abstraction layer on top of a database, allowing analysts to write efficient and secure reports without going into the detail of writing queries. As with any abstraction, it is a trade of simplicity for capability. Products like Tableau and Data Visualiser seek to reverse this trade, putting the power back in the hands of the report builder. However, without quoting Spiderman, care should be taken when doing this.

The result can be that users write inefficient queries, or worse still, incorrect ones. We know there will be some out there that use self-service tools purely as a visualisation engine, simply dropping pre-made datasets into them. If you are looking to produce sustainable, scalable and accessible reporting systems, you need to tackle the problem at the data acquisition stage as well as at the communication stage at the end.

If you are already meeting both requirements, perhaps by using OBI with Data Visualiser (formerly Visual Analyser) or by other means, then that's perfectly good. However, we know from experience that there are many of you out there that have already invested heavily in both OBI and Tableau as separate solutions. Rather than have them linger in a state of conflict, we'd rather nurse them into a state of symbiosis.

The idea behind Unify is that it bridges this gap, allowing you to use your OBIEE system as an efficient data acquisition platform and Tableau as an intuitive playground for users who want to do a bit more with their data. Unify works by using the Tableau Web Data Connector as a data source, with our customised software acting as an interface for creating OBIEE queries and then exporting the results into Tableau.

How It Works

Unify uses Tableau's latest Web Data Connector data source to allow us to dynamically query OBIEE and extract data into Tableau. Once a dataset is extracted into Tableau, it can be used as normal, taking advantage of all of Tableau's powerful features. This native integration means you can add in OBIEE data sources just as you would add in any others - Excel files, SQL results etc. Then you can join the data sources using Tableau itself, even if the data sources don't join up together in the background.

First you open up Tableau and add a Web Data Connector source:

[Screenshot: adding a Web Data Connector source in Tableau]

Then give the link to the Unify application, e.g. http://localhost:8080/unify. This will open up Unify and prompt you to log in with your OBIEE credentials. This is important, as Unify operates through the OBIEE server layer in order to maintain all the security permissions that you've already defined.

[Screenshot: Unify login]

Now that the application is open, you can make OBIEE queries using the interface provided. This is a bit like Answers and allows you to query from any of your available subject areas and presentation columns. The interface also allows you to use filtering, column formulae and OBIEE variables much in the same way as Answers does.

Alternatively, you can open up an existing report that you've made in OBIEE and then edit it at your leisure. Unify will display a preview of the dataset so you can tweak it until you are happy that it is what you want to bring into Tableau.

[Screenshot: the Unify query interface]

Once you're happy with your dataset, click the Unify button in the top right and it will export the data into Tableau. From this point, it behaves exactly as Tableau does with any other data set. This means you can join your OBIEE dataset to external sources, or bring in queries from multiple subject areas from OBIEE and join them in Tableau. Then of course, take advantage of Tableau's powerful and interactive visualisation engine.

[Screenshot: an OBIEE dataset visualised in Tableau]

Unify Server

Unify comes in desktop and server flavours. The main difference between the two is that the server version allows you to upload Tableau workbooks with OBIEE data to Tableau Server and refresh them. With the desktop version, you will only be able to upload static workbooks that you've created, however with the server version of Unify, you can tell Tableau Server to refresh data from OBIEE in accordance with a schedule. This lets you produce production quality dashboards for your users, sourcing data from OBIEE as well as any other source you choose.

Unify Your Data

In a nutshell, Unify allows you to combine the best aspects of two very powerful BI tools and will prevent the need for building all of your reporting artefacts from scratch if you already have a good, working system.

I hope you've found this brief introduction to Unify informative and if you have OBIEE and would like to try it with Tableau, I encourage you to register for a free desktop trial. If you have any questions, please don't hesitate to get in touch.

Categories: BI & Warehousing

Side Effects and Precautions of Advantage DX (アドバンテージDX)

LogicaCMG - Thu, 2017-06-15 02:12

Advantage DX is attracting attention as an aphrodisiac for women, but with anything sold as an aphrodisiac you may well worry: can I take it without harming my health?
So here are the side effects of Advantage DX and the precautions for its use.

Side effects of Advantage DX

To date there have been no reports of side effects from taking Advantage DX.
However, yohimbine, one of the ingredients of Advantage DX, while noted as a highly effective aphrodisiac component, has such a strong effect that in the United States it is only found in prescription-only medicines.
In Japan, too, adding it to non-pharmaceutical products without authorization is prohibited.
It is an ingredient that requires that much care in handling, so while taking the specified amount poses no problem, do not take a large dose at once in the hope of a stronger effect.

Precautions for Advantage DX

Advantage DX has no reported side effects, but because yohimbine is so potent, healthy people can take the specified amount without problems, whereas people with pre-existing conditions should avoid it.

Who should avoid it? People with high blood pressure, heart disease, kidney disease, thyroid disorders, anxiety disorders, depression, or asthma.
Sex involves roughly the same amount of exertion as strenuous exercise, so anyone whose doctor has ruled out strenuous exercise should also refrain from taking Advantage DX, even if they have none of the conditions listed above.

Finally, pregnant or breastfeeding women and anyone under 18 should also not take it.

What Tools Do You Use to Patch EBS 12.2 ORACLE_HOMEs?

Steven Chan - Thu, 2017-06-15 02:00

Oracle E-Business Suite 12.2 has several technology stack components.  Each component has its own ORACLE_HOME:

  • Oracle Fusion Middleware
    • Oracle WebLogic Server (WLS) 10.3.6
    • OHS (WebTier) 11.1.1 and Oracle Common (Common Modules)
  • Application Server (OracleAS)
    • Forms and Reports 10.1.2
  • Oracle Database
    • Oracle Database Server 11.2 or 12.1

Each of these technology stack components can be patched individually.  But what do you use to patch them?  For a quick cheatsheet:
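
Roughly speaking (a sketch; the paths and the patch identifier below are illustrative, not real patches): the WebLogic Server 10.3.6 home is patched with Smart Update (bsu), while the Database, OHS/WebTier, oracle_common, and Forms & Reports homes are patched with OPatch:

# WebLogic Server 10.3.6 home: Smart Update (patch id is a placeholder)
$ cd $FMW_HOME/utils/bsu
$ ./bsu.sh -install -patch_download_dir=$FMW_HOME/utils/bsu/cache_dir \
           -patchlist=XXXX -prod_dir=$FMW_HOME/wlserver_10.3

# Database, OHS (WebTier), oracle_common, Forms & Reports homes: OPatch
$ export ORACLE_HOME=/u01/app/oracle/product/...    # the home being patched
$ cd /path/to/unzipped/patch
$ $ORACLE_HOME/OPatch/opatch apply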


Categories: APPS Blogs

Numeric data not working which used to work earlier

Tom Kyte - Thu, 2017-06-15 01:46
Hi Tom, We have a package which was running fine from quite a sometime in production. All of the sudden, the report associated to this package got error out. The reason being an and condition in the procedure inside the package i.e. and ct.tran...
Categories: DBA Blogs

unable to use AUTOTRACE in SQL Developer Version 4.2.0.17.089

Tom Kyte - Thu, 2017-06-15 01:46
unable to use AUTOTRACE in SQL Developer Version 4.2.0.17.089 it works fine in sqldeveloper-4.1.0.19.07 with the below enclosed setup GRANT SELECT ON SYS.V_$MYSTAT TO RL_AUTOTRACE; GRANT SELECT ON SYS.V_$SESSION TO RL_AUTOTRACE; GRANT SELECT ON S...
Categories: DBA Blogs

Displaying PDF files stored in the Database

Tom Kyte - Thu, 2017-06-15 01:46
Tom, How can we display PDF files stored in the database, using htp.p (OAS PL/SQL cartridge) calls and dbms_lob? We are developing a document archival and display app. Using ctxload, I can upload the files. Now, I would like to display the file...
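
For reference, a common pattern with the PL/SQL gateway looks roughly like this (a sketch, not from the original thread; the docs table and its columns are hypothetical):

-- hypothetical table: docs(id number, mime_type varchar2(100), content blob)
create or replace procedure show_doc (p_id in number) as
  l_blob blob;
  l_mime varchar2(100);
begin
  select content, mime_type into l_blob, l_mime
  from docs
  where id = p_id;
  owa_util.mime_header(l_mime, false);               -- e.g. application/pdf
  htp.p('Content-Length: ' || dbms_lob.getlength(l_blob));
  owa_util.http_header_close;
  wpg_docload.download_file(l_blob);                 -- stream the BLOB to the browser
end;
/
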
Categories: DBA Blogs

Notes From Orlando

Floyd Teter - Wed, 2017-06-14 17:13
I thought y'all would appreciate some notes from last week's OHUG conference in Orlando, Florida.  So, in no particular order, my observations...
  • The sessions were pretty evenly divided between Oracle E-Business, PeopleSoft and HCM Cloud.  Right around 1/3 of total sessions for each track.
  • The mix of attendees, from what I could tell, was about half HCM Cloud users and half on either EBS or PeopleSoft.  And of those on EBS or PeopleSoft, about half were exploring the transformation of moving to HCM Cloud.
  • Attendance for this year's conference seems a little light; maybe down 10 or 15 percent from prior years.  I'm guessing that was caused by a combination of following so closely on the heels of Oracle HCM Cloud World and the fact that it's always tough for a user group conference to get a big draw in Orlando (I don't know why, just know from experience that it's generally true).
  • I did not run into many decision makers this year...very few upper management executives.  But tons of influencers: people who implement and use the products.  I suspect most decision makers are going to Oracle HCM Cloud World while most of those responsible for executing those decisions attend OHUG.
  • A follow on to the prior point.  Attendees were focused on the fundamentals of implementation and use; "how do I do...".  Not many strategic discussions.
  • You couldn't  walk more than 10 feet without encountering a Taleo session or a hallway discussion about Taleo.  Maybe the top topic of the conference.
  • I tried several times to catch Oracle's Justin Challenger, who ran the conference Report Jam this year.  But every time I tried, he was heads down with a group of users designing reports. So I have to think that the Report Jam was a big hit.
  • Likewise, the UX Lab was abuzz with activity whenever I stopped by there.
  • When the folks in Orlando say they're going to get rain, they're not messing around.  It rained hard...and usually more than once...every day I was there.
  • There may not be anyone who understands global implementations of HCM Cloud better than Oracle's Jon McGoy.  The breadth and depth of the material he presented, plus his off-the-cuff answers to questions, was pretty amazing.
So, overall, I think the OHUG conference is in the midst of a transition.  First, it's becoming more cloud-centric.  You can see it in both the session tracks and the focus of the attendees.  Second, it's become a "how to" type of conference.  More emphasis on using, integrating, and extending the applications.  Less emphasis on strategic decisions.   Third, the type of attendee is changing.  More influencers and users, fewer decision makers (hot tip:  some folks think that's a good thing).

I'm already looking forward to next year's OHUG conference.  Can't wait to see how the transition continues to shake out.

Bash: The most useless command (3)

Dietrich Schroff - Wed, 2017-06-14 12:48
The blog statistics show that many people are reading the posts about useless commands. So here is the next candidate, suggested by an anonymous comment: sl.

This is my most hated program on a shell. Why? Its man page says it all:

NAME
       sl − display animations aimed to correct users who accidentally enter sl instead of ls.

And this program is not interruptible by ctrl-c. It shows a train running from the left to the right and blocks your shell for at least 2-3 seconds (depending on the width of your shell window):
$ time sl
real 0m3.347s
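
And if you want to make sure the train never blocks you again, a one-line alias in your shell startup file (a simple sketch) does it:

# ~/.bashrc - neutralize accidental 'sl' typos
alias sl='ls'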

Nice Trick to Get ADF LOV Description Text

Andrejus Baranovski - Wed, 2017-06-14 12:29
I will tell you about a nice trick to display the LOV description. Usually you would create a separate attribute in the VO for the LOV description and base the LOV on this attribute (read about it in my previous post - Defining the LOV on a Reference Attribute in Oracle ADF 11g). But there is one more way - it makes it much faster to define the LOV on the description, but you should be aware of the additional SQL statement executed to fetch the description text.

You could set a converter for the ADF UI LOV, and then the LOV component will use the description by itself, without any additional configuration.

It is important to set the correct order for the LOV display attributes. Make sure the description attribute is first in the list for the converter approach to work:


Go to the ADF UI LOV component and set the converter property. This must point to the binding object's converter expression:
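
In the page source this boils down to something like the following (a sketch; JobId is just an example attribute name):

<af:inputComboboxListOfValues id="jobLov"
    value="#{bindings.JobId.inputValue}"
    model="#{bindings.JobId.listOfValuesModel}"
    converter="#{bindings.JobId.converter}"/>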


What you get - the LOV field displays the description, and the converter masks the ID value behind it:


It offers a nice usability option - you can start typing the description, press tab, and the value will be auto-completed (if autoSubmit is set to true):


Behind the scenes it executes the LOV SQL to get the description text (this SQL is executed on page load too, which is not ideal when you have many LOV fields - in such a situation it is better to use a separate attribute for the description in the same VO):


When the LOV value is changed and the changes are saved, ADF BC updates the ID value only (as expected):


Download sample application - ADFLovDescriptionApp.zip.

Register Now for Oracle Code Online - June 20, 2017

OTN TechBlog - Wed, 2017-06-14 12:09

If you don't live in or near one of the 20 cities that have hosted, or will host, an Oracle Code event, no worries! Next week Oracle Code Online brings the event to you with 15 expert technical sessions in five tracks, covering database, server-side development, mobile development, full-stack web development, and DevOps and systems. Each session features live Q&A with the session presenters.

Kicking off the event is a special keynote session, Building and Deploying Chatbots and Microservices with Oracle Cloud Platform, by Siddhartha Agarwal, Oracle's VP of Product Management and Strategy.

Register now for this special event:

Full Stack Web Sessions
  • Making RESTful Web Services the Easy Way with Node.js
    by Dan McGhan, Developer Advocate, Oracle
  • Visual JavaScript Development for Everyone
    by Shay Shmeltzer, Director of Product Management, Mobile and Development Tools, Oracle
  • JavaScript Toolkit for Enterprise Applications
    by Oracle ACE Director Andreas Koop, CEO and IT Architect, enpit GmbH and Co. with Janis Krasemann, Developer, enpit GmbH and Co.
Mobile Development Sessions
  • Introduction to Platform Agnostic Hybrid App Development
    by Oracle ACE John Sim, Consultant, Fishbowl Solutions
  • Building Hybrid Mobile Apps with JavaScript and Cordova
    by Chris Muir, Senior Principal Mobility and Cloud Development Tools Product Manager, Oracle
  • The Rise of ChatBots in Enterprise Mobile Application Development
    by Frank Nimphius, Senior Principal Product Manager, Oracle Mobility and Development Tools Group
Register now!

Webcast: Testing E-Business Suite with Flow Builder

Steven Chan - Wed, 2017-06-14 12:00

Oracle University has a large number of free recorded webcasts that EBS sysadmins might find useful.  Here's another good webcast on our testing tools:

Mazen Arar, Senior Director of EBS Quality Assurance, provides an overview of how the Quality Assurance team tests Oracle E-Business Suite. It covers the main areas you should consider during functional testing, approaches to new-feature and regression testing, how to reduce test script generation and execution time, experiences in capturing and presenting metrics to showcase the ROI of your testing investment, leveraging automation for testing Oracle E-Business Suite applications, and more. This material was presented at Oracle OpenWorld 2016.


Categories: APPS Blogs

Single-Tenant over bequeath connections

Laurent Schneider - Wed, 2017-06-14 10:46

If you follow Oracle recommendation to use SingleTenant CDB instead of Non-CDB, then a lot of things will break.

I won’t go into details, but basically, in this new architecture you have one core container, called CDB$ROOT, which is the default container you connect to when you connect locally:


sqlplus / as sysdba
SQL> select cdb from v$database;
CDB
---
YES
SQL> select sys_context('USERENV','CON_NAME') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
---------------------------------
CDB$ROOT

SQL> select con_id, NAME from V$CONTAINERS

    CON_ID NAME
---------- ----------
         1 CDB$ROOT
         2 PDB$SEED
         3 ST01

Then you’ll soon realise, you can no longer do what you used to do


SQL> create user u identified by u;
create user u identified by u
            *
ERROR at line 1:
ORA-65096: invalid common user or role name

Some scripts still run fine in the root container: SHUTDOWN ABORT, ALTER SYSTEM SWITCH LOGFILE. Doing a full backup or clone probably won’t hurt you much. Relatively…

But now let’s imagine I have a very simple and very old script to lock scott


vintage.sh:
  echo "alter user scott account lock;"|
    sqlplus -s / as sysdba

This won’t work. I need to lock scott in the SingleTenant container ST01.

I could do this in the container


SQL> alter session set container=ST01;
  Session altered.
SQL> alter user scott account lock;
  User altered.

So far, so good. NOW: how do I make this work without changing the script?

Remember, non-CDB databases, as they were used in Oracle 11, 10, 9, 8, 7 … are now deprecated. Remember, CDB is recommended. Now face it: it’ll break your DBA scripts.

As a production dba, I don’t want to rewrite all the existing scripts. Some are ultra-old and used by people who did not write them.

One method for my script would be to change the container in a login script.


echo "alter session set container=ST01;" > /tmp/login.sql
export ORACLE_PATH=/tmp
vintage.sh
  Session altered.
  User altered.  

(ORACLE_PATH in latest 12.1 and in 12.2, SQL_PATH in older release)

However, if my script must work with both CDB and non-CDB databases, I need to set the container only in the CDB case.

In my login.sql, I first tried to implement some PL/SQL logic, but alter session set container is not working (or rather, working with limitations) with execute immediate.

As well, I don’t want my script to break Oracle 11.

So I decided to do some sqlplus magic with defined variables.


set ver off feed off
-- 1) check if the column v$database.cdb exists
col column_name new_v cdb nopri
def cdb=null
select column_name from dba_tab_columns 
where owner='SYS' and 
table_name='V_$DATABASE' and column_name='CDB';
-- 2) if cdb is YES, then select a dynamic statement using V$PDBS
col pdb new_v pdb nopri
def pdb="null stmt from dual where 1=0"
select 
'''set container="''||name||''"'' stmt from v$pdbs where name!=''PDB$SEED'''
  pdb 
from v$database 
where &cdb='YES';
-- 3) get a dynamic alter session statement. I use a dummy flagger for non-cdb
col stmt new_val stmt nopri
def stmt="SET FLAGGER=OFF"
select &pdb;
-- 4) alter session
alter session &stmt;
set feed 6
col column_name clear
col stmt clear
col pdb clear
undef cdb
undef stmt
undef pdb
del

Now I run my script


11g: ./vintage.sh
User altered.

12c-non-cdb: ./vintage.sh
User altered.

12cR2-single-tenant: ./vintage.sh
User altered.

DISCLAIMER: you shouldn’t use a global login.sql and you should know that secretly fixing old scripts may have side effects. Test, test and retest your code
DISCLAIMER 2: my frequent readers surely wonder if this statement generating a statement generating a statement is for an obfuscation contest


Unpivot

Jonathan Lewis - Wed, 2017-06-14 09:46

An interesting observation appeared recently as a side-channel on a question on the OTN database forum – how does Oracle execute an unpivot() operation. Here’s an example of such a query:

rem
rem     Script:         unpivot_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem

create table t1( id, col1, col2, col3, col4, col5, padding )
cache
pctfree 95 pctused 5
-- compress for query low
as
select
        1, 100 , 200 , 300 , 400 , 500,rpad('x',100)
from
        all_objects
where
        rownum <= 50000
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

select 
        /*+ gather_plan_statistics find this */
        id, max(val) as high_val
from
        t1
unpivot include nulls (
        val for source in (col1, col2, col3, col4, col5)
)
group by id
order by id
; 

I’ve created a table with 50,000 rows (all_objects is quite large in 12.1.0.2 and 12.2.0.1), but with lots of free space per block so that I get three rows per block, for a total of roughly 16,667 blocks, which is going to make it fairly easy to spot any interesting session statistics. Then I’ve used an unpivot() call that has the effect of turning one row with five columns into five rows with one column.

Here’s the basic execution plan for the query (as pulled from memory):


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        | 10695 (100)|      1 |00:00:00.18 |   16671 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |      1 | 10695   (2)|      1 |00:00:00.18 |   16671 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K| 10653   (2)|    250K|00:00:01.41 |   16671 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.52 |   16671 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  2131   (2)|  50000 |00:00:00.12 |   16671 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

According to the plan Oracle has done one full tablescan on the data (Starts = 1), but the unpivot operation has used this to produce 250,000 rows of output from 50,000 rows of input (see the A-Rows column). Note, however, that the cost of the view operation is 5 times the cost of the tablescan but, on the other hand, the number of buffers visited is 16,671 (which matches the size of the table). So have we done 5 tablescans with a union all, or have we done one tablescan?

The next step is to look at the 10053 (optimizer) trace file, specifically for the “unparsed query” which (despite the final plan table showing the plan we’ve just seen above) looked like this – after applying a few purely cosmetic changes:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "from$_subquery$_002"."ID" "ID",MAX("from$_subquery$_002"."VAL") "HIGH_VAL" 
FROM    ( 
                   (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL1' "SOURCE","T1"."COL1" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL2' "SOURCE","T1"."COL2" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL3' "SOURCE","T1"."COL3" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL4' "SOURCE","T1"."COL4" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL5' "SOURCE","T1"."COL5" "VAL" FROM "TEST_USER"."T1" "T1")
        ) "from$_subquery$_002" 
GROUP BY "from$_subquery$_002"."ID" 
ORDER BY "from$_subquery$_002"."ID"
;

And then there’s the outline (which I can see in the 10053 trace, or in the plan pulled from memory by a call to dbms_xplan.display_cursor()).


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$6")
      NO_ACCESS(@"SEL$6" "from$_subquery$_002"@"SEL$6")
      FULL(@"SEL$5" "T1"@"SEL$5")
      FULL(@"SEL$4" "T1"@"SEL$4")
      FULL(@"SEL$3" "T1"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

And if I also pull the alias (query block and fully qualified table name) information from memory:


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$6
   2 - SET$1 / from$_subquery$_002@SEL$6
   3 - SET$1 
   4 - SEL$5 / T1@SEL$5

Then there’s the alternative (though not necessarily correct) way to find an “unparsed” version of the query – dbms_sql2.expand_sql_text() in 11g, dbms_utility.expand_sql_text() in 12c:


variable m_sql_out clob

declare
        m_sql_in    clob :=
                'select /*+ gather_plan_statistics find this */ id, max(val) as high_val
                from    t1
                unpivot include nulls ( val for source in (col1, col2, col3, col4, col5) )
                group by id
                order by id
                '
        ;
begin

--      dbms_sql2.expand_sql_text(        -- 11g
        dbms_utility.expand_sql_text(     -- 12c
                m_sql_in,
                :m_sql_out
        );

end;
/

set long 20000
print m_sql_out

M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  ( (SELECT "A3"."ID" "ID",
"A3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1"
 "A3") UNION ALL  (SELECT "A4"."ID" "ID","A4"."PADDING" "PADDING",'COL2' "SOURCE
","A4"."COL2" "VAL" FROM "TEST_USER"."T1" "A4") UNION ALL  (SELECT "A5"."ID" "ID
","A5"."PADDING" "PADDING",'COL3' "SOURCE","A5"."COL3" "VAL" FROM "TEST_USER"."T
1" "A5") UNION ALL  (SELECT "A6"."ID" "ID","A6"."PADDING" "PADDING",'COL4' "SOUR
CE","A6"."COL4" "VAL" FROM "TEST_USER"."T1" "A6") UNION ALL  (SELECT "A7"."ID" "
ID","A7"."PADDING" "PADDING",'COL5' "SOURCE","A7"."COL5" "VAL" FROM "TEST_USER".
"T1" "A7")) "A1" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

So at some level the optimizer does think it’s doing 5 tablescans – and the t1 reported in the unpivot plan is from the fifth (i.e. last) copy of the table in the union all. But the execution statistics and the session statistics say I’ve done just one tablescan – and this persists even when I make the table so large that it (a) won’t fit the buffer cache, and/or (b) uses direct path reads, and/or (c) runs under Exadata with Hybrid columnar compression.

So what’s (probably) happening?

I think Oracle has a code path that says it’s doing a union all of tablescans (at least for this example) but tells it that the union all is there as an expansion of an unpivot so (sticking with an example that does a tablescan into the buffer cache) Oracle reads the number of blocks dictated by the current multiblock read count into the cache, pins that one batch of blocks, scans the batch 5 times (or as required), unpins the batch and then reads the next batch. So the session does five tablescans but does them in a way that lets you see only one tablescan in the statistics.
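
One way to check this from the session itself (a sketch using the standard dynamic performance views) is to look at the session statistics after running the query; a single pass over the 16,667-block table shows up as one long tablescan:

-- how many long tablescans has this session done, and how many blocks did they visit?
select sn.name, ms.value
from v$mystat ms
join v$statname sn on sn.statistic# = ms.statistic#
where sn.name in ('table scans (long tables)', 'table scan blocks gotten');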

Footnote

There was a change in the results when I tested this on 12.2.0.1: the unparsed query reported only a two-part union all subquery, and the table alias information in the plan identified the referenced table as the copy of the table from the second subquery in the union all. More significantly, the cost of the VIEW operation was a close match to the cost of a single tablescan, rather than being a multiple thereof:


select * from table(dbms_xplan.display_cursor('1k077bzp0t6mn',null,'outline alias cost allstats last'));

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |  1117 (100)|      1 |00:00:00.13 |    8350 |   8334 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |    250K|  1117   (6)|      1 |00:00:00.13 |    8350 |   8334 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K|  1076   (2)|    250K|00:00:00.13 |    8350 |   8334 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.09 |    8350 |   8334 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  1074   (2)|  50000 |00:00:00.07 |    8350 |   8334 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$3
   2 - SET$1 / from$_subquery$_002@SEL$3
   3 - SET$1
   4 - SEL$2 / T1@SEL$2

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$3")
      NO_ACCESS(@"SEL$3" "from$_subquery$_002"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      END_OUTLINE_DATA
  */


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  (SELECT "A3"."ID" "ID","A
3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1" "
A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL2' "SOURCE","A3
"."COL2" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."
PADDING" "PADDING",'COL3' "SOURCE","A3"."COL3" "VAL" FROM "TEST_USER"."T1" "A3"
UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL4' "SOURCE","A3"."C
OL4" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADD
ING" "PADDING",'COL5' "SOURCE","A3"."COL5" "VAL" FROM "TEST_USER"."T1" "A3") "A1
" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

It used to be hard enough working out what the run-time engine was doing when the optimizer and the run-time engine claimed that they were working on the same (transformed version of the) SQL; but now we can get the impression that the optimizer is directing the run-time engine to execute a plan that couldn’t possibly match the (unparsed) query that the optimizer had been considering.


The data security mess

DBMS2 - Wed, 2017-06-14 08:21

A large fraction of my briefings this year have included a focus on data security. This is the first year in the past 35 that that’s been true.* I believe that reasons for this trend include:

  • Security is an important aspect of being “enterprise-grade”. Other important checkboxes have been largely filled in. Now it’s security’s turn.
  • A major platform shift, namely to the cloud, is underway or at least being planned for. Security is an important thing to think about as that happens.
  • The cloud even aside, technology trends have created new ways to lose data, which security technology needs to address.
  • Traditionally paranoid industries are still paranoid.
  • Other industries are newly (and rightfully) terrified of exposing customer data.
  • My clients at Cloudera thought they had a chance to get significant messaging leverage from emphasizing security. So far, it seems that they were correct.

*Not really an exception: I did once make it a project to learn about classic network security, including firewall appliances and so on.

Certain security requirements, desires or features keep coming up. These include (and as in many of my lists, these overlap):

  • Easy, comprehensive access control. More on this below.
  • Encryption. If other forms of security were perfect, encryption would never be needed. But they’re not.
  • Auditing. Ideally, auditing can alert you to trouble before (much) damage is done. If not, then it can at least help you do proactive damage control in the face of breach.
  • Whatever regulators mandate.
  • Whatever is generally regarded as best practices. Security “best practices” generally keep enterprises out of legal and regulatory trouble, or at least minimize same. They also keep employees out of legal and career trouble, or minimize same. Hopefully, they even keep data safe.
  • Whatever the government is known to use. This is a common proxy for “best practices”.

More specific or extreme requirements include: 

I don’t know how widely these latter kinds of requirements will spread.

The most confusing part of all this may be access control.

  • Security has a concept called AAA, standing for Authentication, Authorization and Accounting/Auditing/Other things that start with "A". Yes — even the core acronym in this area is ill-defined.
  • The new standard for authentication is Kerberos. Or maybe it’s SAML (Security Assertion Markup Language). But SAML is actually an old, now-fragmented standard. But it’s also particularly popular in new, cloud use cases. And Kerberos is actually even older than SAML.
  • Suppose we want to deny somebody authorization to access certain raw data, but let them see certain aggregated or derived information. How can we be sure they can’t really see the forbidden underlying data, except through a case-by-case analysis? And if that case-by-case analysis is needed, how can the authorization rules ever be simple?

Further confusing matters, it is an extremely common analytic practice to extract data from somewhere and put it somewhere else to be analyzed. Such extracts are an obvious vector for data breaches, especially when the target system is managed by an individual or IT-weak department. Excel-on-laptops is probably the worst case, but even fat-client BI — both QlikView and Tableau are commonly used with local in-memory data staging — can present substantial security risks. To limit such risks, IT departments are trying to impose new standards and controls on departmental analytics. But IT has been fighting that war for many decades, and it hasn’t won yet.

And that’s all when data is controlled by a single enterprise. Inter-enterprise data sharing confuses things even more. For example, national security breaches in the US tend to come from government contractors more than government employees. (Ed Snowden is the most famous example. Chelsea Manning is the most famous exception.) And as was already acknowledged above, even putting your data under control of a SaaS vendor opens hard-to-plug security holes.

Data security is a real mess.

Categories: Other
