Feed aggregator

Bequeath connect to PDB: set container in logon trigger?

Yann Neuhaus - 11 hours 22 min ago

There are a few changes when you move to the multitenant architecture, and one of them is that you must connect with a service name: you cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it, and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with a bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by ORACLE_SID. And I want to migrate to a CDB without changing anything on the client connection configuration side. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.
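
For reference, a dedicated service for the PDB is easy to set up on the server side. Here is a minimal sketch (the service name MYAPP and these DBMS_SERVICE calls are my own illustration, not part of the demo below; note that every PDB also exposes a default service named after the PDB, so this step is optional):

SQL> alter session set container=PDB1;
Session altered.

SQL> exec dbms_service.create_service(service_name=>'MYAPP',network_name=>'MYAPP');
PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('MYAPP');
PL/SQL procedure successfully completed.

Clients can then reference this service in a connect string, a TNS alias, or through TWO_TASK as shown at the end of this post.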

As we can connect only to the CDB$ROOT with a bequeath connection, we have to create a common user. Because the idea is not to change anything on the client configuration, and there’s very little chance that the existing username starts with C##, I’ll start by removing the mandatory prefix for common users.


SQL> show parameter common_user_prefix
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
common_user_prefix                   string
 
SQL> alter system set common_user_prefix='' scope=spfile;
System altered.
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
...

Then I create my common user:

SQL> create user MYOLDUSER identified by covfefe container=all;
User created.

This user must be able to connect to the CDB:

SQL> grant create session to MYOLDUSER container=current;
Grant succeeded.

And then I want it to switch immediately to PDB1 using a logon trigger:

SQL> create or replace trigger SET_CONTAINER_AT_LOGON after logon on database
2 when (user in ('MYOLDUSER'))
3 begin
4 execute immediate 'alter session set container=PDB1';
5 end;
6 /
Trigger created.

Once on PDB1 this user will have some privileges, and for the example I will grant him a default role:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> create role MYROLE;
Role created.
 
SQL> grant MYROLE to MYOLDUSER container=current;
Grant succeeded.

The documentation says that “When you grant a role to a user, the role is granted as a default role for that user and is therefore enabled immediately upon logon”, so I don’t need to:

SQL> alter user MYOLDUSER default role MYROLE;
User altered.

But the doc says ‘logon’, and technically I do not log on to PDB1: I just set the container. However, if you test it you will see that default roles are also set on ‘set container’. And anyway, we cannot set a role in a procedure, neither with ‘set role’ nor with dbms_session.set_role:

ORA-06565: cannot execute SET ROLE from within stored procedure
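
To make that concrete, here is the kind of variant that does not work (my own sketch, not from the original post): the trigger compiles, but the role enablement raises ORA-06565 when the trigger fires at logon.

SQL> create or replace trigger SET_CONTAINER_AT_LOGON after logon on database
2 when (user in ('MYOLDUSER'))
3 begin
4 execute immediate 'alter session set container=PDB1';
5 -- neither SET ROLE nor dbms_session.set_role is allowed in stored PL/SQL
6 dbms_session.set_role('MYROLE');
7 end;
8 /
Trigger created.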

Then, I can now connect locally to the CDB$ROOT with this user:

SQL> connect MYOLDUSER/covfefe
Connected.

And I’m automatically switched to the PDB1:

SQL> show con_name
 
CON_NAME
------------------------------
PDB1

Issue #1: default roles

However the default roles are not set:

SQL> select * from session_roles;
 
no rows selected

I have to set the role once connected:

SQL> set role all;
Role set.
 
SQL> select * from session_roles;
 
ROLE
--------------------------------------------------------------------------------
MYROLE

This is probably not what we want when we cannot change anything on the application side. This is considered a bug (Bug 25081564 : ALTER SESSION SET CONTAINER IN “ON LOGON TRIGGER” IS NOT WORKING), fixed in 18.1 (expected in Q1 2018), and there’s a patch for 12.1 and 12.2: https://updates.oracle.com/download/25081564.html

Issue #2: core dump

There’s another issue. If you run the same connection with SQLcl, you get a core dump in the client library libclntsh.so on kpuSetContainerNfy:

SQLcl: Release 17.2.0 Production on Tue Aug 22 22:00:52 2017
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
SQL> connect MYOLDUSER/covfefe
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcaa172faf6, pid=31242, tid=140510230116096
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libclntsh.so.12.1+0x11d8af6] kpuSetContainerNfy+0x66
#
# Core dump written. Default location: /media/sf_share/122/blogs/core or core.31242

There’s an SR opened for that. This is not a showstopper here because, in a context where nothing changes on the client side, sqlplus will probably be used anyway. However, it is another point showing that ‘set container’ in a logon trigger may have some implementation problems.

Issue #3: security

In my opinion, there is a bigger problem here. With sqlplus (or with SQLcl not using a local connection) I can connect to the CDB$ROOT and switch to PDB1. But look at all the commands above… where did I grant the ‘set container’ privilege for MYOLDUSER on the PDB1 container? Nowhere. MYOLDUSER has no create session and no set container privilege there, but it is able to connect to PDB1 thanks to the logon trigger. Of course, the logon trigger is defined by a DBA who knows what he does. But in my opinion, it is not a good idea to bypass privilege checking.
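
For comparison, a privilege-checked setup would make those grants explicit. Something like this (a sketch of my own, not part of the demo above):

SQL> grant create session to MYOLDUSER container=all;
SQL> grant set container to MYOLDUSER container=all;

With those privileges, MYOLDUSER could issue ‘alter session set container=PDB1’ itself, and the switch would go through normal privilege checking instead of bypassing it.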

So what?

With no default roles, and a connection succeeding without the right privileges, the security model is bypassed here. And disabling the common user prefix will raise other issues one day with plugging operations. So, in my opinion, this is not a good way to work around the need to connect with a service, especially in the context of a legacy application where you cannot change the way it connects: you just postpone the problems and make them bigger later.

The real solution is to connect to a service (and that’s not difficult, even when you can’t change the code, thanks to the TWO_TASK environment variable).
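
Here is a minimal sketch of the client side, assuming the default listener on port 1521 and EZConnect name resolution (a TNS alias works as well):

$ export TWO_TASK=//localhost:1521/PDB1
$ sqlplus MYOLDUSER/covfefe

The application still connects with user/password and no connect string, but the session now goes through the listener and lands directly in the PDB designated by the service, with normal privilege checking and default roles enabled.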

 

The post Bequeath connect to PDB: set container in logon trigger? appeared first on Blog dbi services.

Performance issues

Tom Kyte - Tue, 2017-08-22 18:46
I'm getting performance issues while running the below query. It's taking 7 minutes to give results. Table V_DM_test_VISIT contains 25 million records; table V_DM_PLANNED_VISIT contains 60K records. I'm using analytical function to find o...
Categories: DBA Blogs

User-Defined Aggregate function with more than 1 argument

Tom Kyte - Tue, 2017-08-22 18:46
Is it possible to create a user-defined aggregate function with more than 1 argument? Let's say I would like to create a TOP_NTH(number, integer) function that takes a number (for a column) and an integer indicating the ranking of the top value. Such funct...
Categories: DBA Blogs

Unable to set pctversion to 0

Tom Kyte - Tue, 2017-08-22 18:46
Hello Tom, I am trying below code to reset table pctversion to 0 from default 10. but on Oracle 12c 12.1.0.2 it won't work, but the same command works on Oracle 12c 12.2.0.1 and also works on 11gR2. <code> alter table ev_log modify lob (event_...
Categories: DBA Blogs

Performance of querying CHAR columns

Tom Kyte - Tue, 2017-08-22 18:46
I wasn't able to parameterize our queries because our database schema uses a lot of CHAR types on key fields, but was told to use the LIKE operator instead of =. This did allow the parameterized queries to work, but I'm concerned how this will affect...
Categories: DBA Blogs

Remove redundant entries from table

Tom Kyte - Tue, 2017-08-22 18:46
Hi Experts, Please find below table & data. <code>create table orders ( order_id varchar2(10),quantity varchar2(10), price varchar2(10), id varchar2(10)); insert into orders values ('O1','2','100','id1'); insert into orders values ('O1','3'...
Categories: DBA Blogs

Load XML File (Physical file) using SQL Loader into a XMLType Column

Tom Kyte - Tue, 2017-08-22 18:46
Hello Tom I need to load a XML File, which is available on the oracle server using a Host Program in Oracle Applications. I have done the below 1. Created a Host Concurrent Program 2. Call the SQL Loader Control File from the PROG File. But...
Categories: DBA Blogs

ORA-00972: identifier is too long

Tom Kyte - Tue, 2017-08-22 18:46
I have a question about the "identifier is too long" error. I understand it if I am trying to create a column name that is too long and Oracle complains about it. But if I do a select and alias it with my own text, why is this a problem? Here is a sample: <co...
Categories: DBA Blogs

Customizing a CoreOS image/iso: Ignition

Dietrich Schroff - Tue, 2017-08-22 14:48
After the first steps with CoreOS (VirtualBox installation) and a successful ssh login, it is clear that the CoreOS image has to be customized. Otherwise, every reboot will wipe any configuration applied after boot.

CoreOS provides a coreos-install script to do this.
coreos-install -help
Usage: ./coreos-install [-C channel] -d /dev/device
Options:
    -d DEVICE   Install Container Linux to the given device.
    -V VERSION  Version to install (e.g. current) [default: current]
    -B BOARD    Container Linux board to use [default: amd64-usr]
    -C CHANNEL  Release channel to use (e.g. beta) [default: stable]
    -o OEM      OEM type to install (e.g. ami) [default: (none)]
    -c CLOUD    Insert a cloud-init config to be executed on boot.
    -i IGNITION Insert an Ignition config to be executed on boot.
    -b BASEURL  URL to the image mirror (overrides BOARD)
    -k KEYFILE  Override default GPG key for verifying image signature
    -f IMAGE    Install unverified local image file to disk instead of fetching
    -n          Copy generated network units to the root partition.
    -v          Super verbose, for debugging.
    -h          This ;-)

This tool installs CoreOS Container Linux on a block device. If you PXE booted
Container Linux on a machine then use this tool to make a permanent install.
First you have to add an 8GB partition to your CoreOS installation:

 

Then enable ssh login (please refer to this posting) and transfer the coreos-install script and the following ignition.json file:
{
  "ignition": {
    "version": "2.0.0",
    "config": {}
  },
  "storage": {},
  "systemd": {},
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC....  "
        ]
      }
    ]
  }
}
Please use this validator to check your ignition.json!
And here we go:
core@localhost ~ $ ./coreos-install -d /dev/sda -i ignition.json
./coreos-install: Target block device (/dev/sda) is not writable (are you root?)
Ok.
sudo bash
bash-4.3# ./coreos-install -d /dev/sda -i ignition.json
Downloading the signature for https://stable.release.core-os.net/amd64-usr/1409.6.0/coreos_production_image.bin.bz2...
wget --no-verbose -O /tmp/coreos-install.JROwzOsi1W/coreos_production_image.bin.bz2.sig https://stable.release.core-os.net/amd64-usr/1409.6.0/coreos_production_image.bin.bz2.sig
2017-08-22 19:01:28 URL:https://stable.release.core-os.net/amd64-usr/1409.6.0/coreos_production_image.bin.bz2.sig [543/543] -> "/tmp/coreos-install.JROwzOsi1W/coreos_production_image.bin.bz2.sig" [1]
Downloading, writing and verifying coreos_production_image.bin.bz2...
2017-08-22 19:02:31 URL:https://stable.release.core-os.net/amd64-usr/1409.6.0/coreos_production_image.bin.bz2 [288249718/288249718] -> "-" [1]
gpg: Signature made Thu Jul  6 01:34:01 2017 UTC
gpg:                using RSA key 07FA9ED31CB5FA26
gpg: key 50E0885593D2DCB4 marked as ultimately trusted
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "CoreOS Buildbot (Offical Builds) " [ultimate]
Installing Ignition config ignition.json...
Success! CoreOS Container Linux stable 1409.6.0 is installed on /dev/sda
Then unmount the CD drive from your CoreOS virtual machine:



and reboot. But this does not work:

Failed to start Ignition (disks)
See 'systemctl status ignition-disks.service'

Hmmm... I found nothing with Google, so I filed an issue at GitHub:
https://github.com/coreos/bugs/issues/2118
And got the solution within 6 minutes!

I missed the closing quote in line 14:
"ssh-rsa AAAAB3NzaC....  "
And now it worked:


Really cool...

If you are interested in more details, please check this site:

CoreOS: structure of the ISO image

Dietrich Schroff - Tue, 2017-08-22 14:40
After some first steps with CoreOS ("First step to microservices: CoreOS on virtualbox"), my problem is how to customize CoreOS. After each reboot all changes are gone (tmpfs / squashfs).

There are some tutorials which recommend the usage of etcd, but even this has to be configured into the CoreOS linux...

The first idea: what about customizing the ISO image...

mount -o loop coreos_production_iso_image.iso /media/iso

Inside this:
$ ls
coreos  isolinux  syslinux
du -sh *
276M    coreos
1,9M    isolinux
2,5K    syslinux

and inside the coreos directory:
ls -l coreos
total 282349
-r--r--r-- 1 root root 253738975 Jul  6 03:47 cpio.gz
-r--r--r-- 1 root root  35385632 Jul  6 03:47 vmlinuz

Hmmm.
$ gunzip cpio.gz
$ cat cpio | cpio -idmv
.
usr.squashfs
etc

The directory etc is empty. And usr.squashfs can be mounted:

mount usr.squashfs /media/sqashfs

and here we go:
du -sh *
221M    bin
34M    boot
512    lib
240M    lib64
18M    libexec
2,5K    local
0    lost+found
512    postinst
29M    sbin
69M    share
512    tmp

I think modifying these files is not really helpful. Moreover, there is no home directory in which to install my authorized_keys file.

CoreOS itself states:
But I hope there is another way to modify CoreOS without Ignition. Stay tuned ;-)

Enable Windows Instance access via RDP on Oracle Compute Cloud Service

Oracle Compute Cloud Service optimized for Oracle Public Cloud Machine (OPCM) provides Infrastructure as a Service (IaaS) and also acts as the base on which the other cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Upcoming Webinar, “WebCenter Search that Works!” highlights Oracle WebCenter integration with Mindbreeze InSpire

Earlier this month, Fishbowl announced the release of our Oracle WebCenter Content Connector for Mindbreeze InSpire. The Connector enables the Mindbreeze enterprise search appliance to securely index and serve content stored in WebCenter Content. The Connector also allows customers to leverage the Mindbreeze Search App Designer to embed modern search apps directly in WebCenter Content.

As the quantity of unstructured information continues to expand, content management success depends on the ability to find data in a growing information flood. Without search that works, managed content becomes lost content. By integrating Oracle WebCenter with Mindbreeze InSpire you can improve information discovery, increase user adoption, and encourage content reuse through better search.

In our upcoming webinar, we will provide an overview of the Mindbreeze InSpire enterprise search appliance and our integrations with both WebCenter Content and Portal. We’ll cover what a typical implementation looks like and why customers are making the switch. We’ll also discuss the migration path off deprecated Oracle Secure Enterprise Search and Google Search Appliance technologies, and options for adding other sources like SharePoint and network shares.

We hope you’ll join us.

The post Upcoming Webinar, “WebCenter Search that Works!” highlights Oracle WebCenter integration with Mindbreeze InSpire appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Imanis Data

DBMS2 - Tue, 2017-08-22 07:46

I talked recently with the folks at Imanis Data. For starters:

  • The point of Imanis is to make copies of your databases, for purposes such as backup/restore, test/analysis, or compliance-driven archiving. (That’s in declining order of current customer activity.) Another use is migration via restoring to a different cluster than the one that created the data in the first place.
  • The data can come from NoSQL database managers, from Hadoop, or from Vertica. (Again, that’s in declining order.)
  • As you might imagine, Imanis makes incremental backups; the only full backup is the first one you do for that database.
  • “Imanis” is a new name; the previous name was “Talena”.

Also:

  • Imanis has ~35 subscription customers, a significant majority of which are in the Fortune 1000.
  • Customer industries, in roughly declining order, include:
    • Financial services other than insurance.
    • Insurance.
    • Retail.
    • “Technology”.
  • ~40% of Imanis customers are in the public cloud.
  • Imanis is focused on the North American market at this time.
  • Imanis has ~45 employees.
  • The Imanis product just hit Version 3.

Imanis correctly observes that there are multiple reasons you might want to recover from backup, including:

  • General disaster/system failure.
  • Bug in an application that writes data.
  • Malicious acts, including encryption-by-ransomware.

Imanis uses the phrase “point-in-time backup” to emphasize its flexibility in letting you choose your favorite time-version of your rolling backup.

Imanis also correctly draws the inference that the right backup strategy is some version of:

  • Make backups very frequently. This boils down to “Do a great job of making incremental backups (and restoring from them when necessary).” This is where Imanis has spent the bulk of its technical effort to date.
  • In case recovery is needed, identify the last clean (or provably/confidently clean) version of the database and restore from that. The identification part boils down to letting the backup databases be queried directly. That’s largely a roadmap item.
    • Imanis has recently added the capability to build its own functionality querying the backup database.
    • JDBC/whatever general access is still in the future.

Note: When Imanis backups offer direct query access, the possibility will of course exist to use the backup data for general query processing. But while that kind of capability sounds great in theory, I’m not aware of it being a big deal (on technology stacks that already offer it) in practice.

The most technically notable other use cases Imanis mentioned are probably:

  • Data science dataset generation. Imanis lets you generate a partial copy of the database for analytic or test purposes.
    • You can project, select or sample your data, which suggests use of the current query capabilities.
    • There’s an API to let you mask Personally Identifiable Information by writing your own data transformations.
  • Archiving/tiering/ILM (Information Lifecycle Management). Imanis lets you divide data according to its hotness.

Imanis views its competition as:

  • Native utilities of the data stores.
  • Hand-coded scripts.
  • Datos.io, principally in the Cassandra market (so far).

Beyond those, the obvious comparison to Imanis is Delphix. I haven’t spoken with Delphix for a few years, but I believe that key differences between Delphix and Imanis start:

  • Delphix is focused on widely-installed RDBMS such as Oracle.
  • Delphix actually tries to have different production logical copies of your database run off of the same physical copy. Imanis, in contrast, offers technology to help you copy your databases quickly and effectively, but the copies you actually use will indeed be separate from each other.

Imanis software runs on its own cluster, based on hacked Hadoop. A lot of the hacking seems to relate to a metadata store, which supports things like:

  • Understanding which (incrementally backed up) blocks need to be pulled together to make a specific copy of the database.
  • Putting data in different places for ILM/tiering.

Another piece of Imanis tech is machine-learning-based anomaly detection.

  • As incrementally backed-up blocks arrive, Imanis flags anomalous ones.
  • Each flag is given a reason.
  • You can denounce the flag as a false alert, and hopefully similar flags won’t be raised in the future.

The technology for this seems rather basic:

  • Random forests for the flagging.
  • No drilldown w/in the Imanis system for follow-up.

But in general concept this is something a lot more systems should be doing.

Most of the rest of Imanis’ tech story is straightforward — support various alternatives for computing platforms, offer the usual security choices, etc. One exception that was new to me was the use of erasure codes, which seem to be a generalization of the concept of parity bits. Allegedly, when used in a storage context these have the near-magical property of offering 4X replication safety with only a 1.5X expansion of data volume. I won’t claim to have understood the subject well enough to see how that could make sense, or what tradeoffs it would entail.
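
For what it's worth, the arithmetic does work out for standard Reed-Solomon-style erasure coding (my own illustration, not a description of Imanis' implementation). Split each object into k data blocks, add m parity blocks, and any k of the k+m blocks suffice to reconstruct the data:

  • k = 8 data blocks, m = 4 parity blocks.
  • Storage expansion = (8 + 4) / 8 = 1.5X.
  • Data survives the loss of any 4 of the 12 blocks — the kind of failure tolerance that would otherwise require several full replicas.

The trade-off, relative to plain replication, is the extra CPU and I/O needed to reconstruct data from the surviving blocks.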

Categories: Other

An Important Change Is Coming for Application Express 5.2 (if you use the Mobile User Interface)

Joel Kallman - Tue, 2017-08-22 07:24
An important change is coming in Oracle Application Express 5.2.  The mobile user interface (based upon jQuery Mobile) will be deprecated.  The deprecation announcement will be included with the forthcoming Application Express 5.1.3 patch set release.

What this means for you
  • If you have applications developed with the mobile user interface based upon jQuery Mobile, they will continue to function in a supported fashion in Application Express 5.1.x.  Premier Support of Oracle Application Express 5.1.x continues until December 2021.
  • New mobile applications should be created using the Desktop User Interface and Universal Theme (Theme 42) and not the Mobile User Interface & jQuery Mobile-based theme (Theme 51).
  • Existing mobile applications should be migrated to the Universal Theme if you want to run them on Oracle Application Express 5.2 and later.

Why is this happening?

The last stable version of the jQuery Mobile library was released on October 31, 2014, and it only provides support for jQuery versions 1.8 through 1.11, and jQuery 2.1.  Patches are no longer provided for these versions of jQuery.  Application Express 5.2 will ship with jQuery 3.2 (necessary for us to adopt the latest Oracle JET libraries).  jQuery Mobile will not function properly with jQuery 3.2.

While we will make every effort to devise a way to include the necessary libraries for jQuery-mobile based applications in Application Express 5.2, that may not solve all problems entirely, especially when we try to include the data visualizations via Oracle JET, which have different dependencies.  We do truly strive to ensure backward compatibility of APEX applications across upgrades.  It's of paramount importance to us, and we spend a lot of time and energy ensuring that stability, operation, look and feel remain constant across APEX version upgrades.  But in this specific case, there doesn't appear to be a practical solution.  We're in a box.

There has been some recent activity on a new version of jQuery Mobile, namely 1.5alpha.  But given the very long hiatus in jQuery Mobile development, and the likelihood that jQuery Mobile 1.5 will not be stable enough by the time Application Express 5.2 is released, we collectively decided to announce deprecation of the Mobile User Interface based upon jQuery Mobile.  We literally spent months researching and contemplating this dilemma.  It's unfortunate.  The positive angle, though, is that we're in complete control of the Universal Theme and should not encounter similar issues with Universal Theme-based applications in the future.

DATE output format

Tom Kyte - Tue, 2017-08-22 00:26
Hi, In database_properties the date format is 'DD-MON-RR', but the table is showing a different format in its date columns. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Partitioning, OLAP, Ad...
Categories: DBA Blogs

Database Upgrade / APEX versions

Tom Kyte - Tue, 2017-08-22 00:26
Hi Team, We're planning to upgrade our database from 11.2.0.4 to 12.1.0.2 (non-pluggable configuration). The current version of APEX installed in our environment is 5.1 My question is - during the upgrade process, is the installer smart enou...
Categories: DBA Blogs

Inserts into materialized view tables are parsed each time.

Tom Kyte - Tue, 2017-08-22 00:26
Hi We have a PL/SQL application which is inserting into database tables. These tables have materialized view logs. Noticed when looking into v$sql that the number of executions of these insert statements is much greater than the number of parsing ope...
Categories: DBA Blogs

Search for top 10 queries generating huge redo

Tom Kyte - Tue, 2017-08-22 00:26
Dear Experts, Please can you help in knowing how to find for top 10 queries/sessions generating huge redo. I tried with the below queries to check redo generated per day, but not able to find top 10 such sessions which generates so. select t...
Categories: DBA Blogs

use clause optionally enclosed by '"' in sql loader 11.2.0.2.0

Tom Kyte - Tue, 2017-08-22 00:26
I have strange behavior using optionally enclosed by '"' in my .ctl file. It doesn't work as expected and I don't understand why. Could you please help me? Here is the .ctl file: <code>OPTIONS (PARALLEL=TRUE,DIRECT=TRUE,ROWS= '150000',BINDSIZE= '4500...
Categories: DBA Blogs

Unable to extend Tablespace - ORA-01683

Tom Kyte - Tue, 2017-08-22 00:26
Hi Oracle Masters, We run End of Day (EOD) for our bank, and once in a while the EOD process aborts with the following error: ORA-01683: unable to extend index XXXXXX.PK01_TABLE_NAME partition P_1132 by 1024 in tablespace TBLSPCNAME. This kind...
Categories: DBA Blogs
