Skip navigation.

DBA Blogs

Avro MapReduce Jobs in Oozie

Pythian Group - Mon, 2014-10-20 07:56

Normally, when using Avro files as input or output to a MapReduce job, you write a Java main() method to set up the Job using AvroJob. That documentation does a good job of explaining where to use AvroMapper, AvroReducer, AvroKey, and AvroValue (N.B. if you want a file full of a particular Avro object, rather than a key-value pair of two Avro types, use AvroKeyOutputFormat as the OutputFormat, AvroKey as the key, and NullWritable as the value).

Sometimes (like if you’re using Oozie), you need to set everything up without using AvroJob as a helper. The documentation is less clear here, so here’s a list of Hadoop keys and the appropriate values (for MRv2):

  • avro.schema.output.key - the JSON representation of the output key’s Avro schema. For large objects you may run afoul of Oozie’s 100,000-character workflow limit, in which case you can isolate your Avro job in a subflow
  • avro.schema.output.value - likewise, if you’re emitting key-value pairs instead of using AvroKeyOutputFormat, put your value’s JSON schema here
  • avro.mapper - your mapper class that extends AvroMapper. You can also use a normal Mapper (with the normal Mapper configuration option), but you’ll have to handle converting the AvroKey/AvroValue yourself
  • avro.reducer - likewise, a class that extends AvroReducer
  • mapreduce.job.output.key.class - always AvroKey
  • mapreduce.job.output.value.class - AvroValue or NullWritable, as above
  • mapreduce.input.format.class - if you’re reading Avro files as input, you’ll need to set this to an Avro input format such as AvroKeyInputFormat
  • mapreduce.map.output.key.class - AvroKey, if you’re using a subclass of AvroMapper. If you write your own Mapper, you can pick the key class yourself
  • mapreduce.map.output.value.class - AvroValue or NullWritable, unless you write a Mapper without subclassing AvroMapper
  • io.serializations - AvroJob sets this value to the following:

org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization, org.apache.avro.hadoop.io.AvroSerialization

With these configuration options you should be able to set up an Avro job in Oozie, or any other place where you have to set up your MapReduce job manually.
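Put together, these keys land in the <configuration> block of an Oozie map-reduce action. A minimal sketch follows; the action name, mapper/reducer class names, paths, and schema are hypothetical placeholders, not values from a real workflow:

```xml
<action name="avro-mr">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <property>
        <name>avro.mapper</name>
        <value>com.example.MyAvroMapper</value>
      </property>
      <property>
        <name>avro.reducer</name>
        <value>com.example.MyAvroReducer</value>
      </property>
      <property>
        <name>avro.schema.output.key</name>
        <value>{"type":"record","name":"MyRecord","fields":[...]}</value>
      </property>
      <property>
        <name>mapreduce.job.output.key.class</name>
        <value>org.apache.avro.mapred.AvroKey</value>
      </property>
      <property>
        <name>mapreduce.job.output.value.class</name>
        <value>org.apache.hadoop.io.NullWritable</value>
      </property>
      <property>
        <name>io.serializations</name>
        <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization,org.apache.avro.hadoop.io.AvroSerialization</value>
      </property>
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>
```

The input-format and map-output-class properties from the list above go in the same block when your job needs them.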

Categories: DBA Blogs

An Introduction to Extended Data Types in Oracle 12c

Pythian Group - Mon, 2014-10-20 07:55

One of the lesser known new features that comes as a boon to many developers and DBAs is the provision of implicit handling of large data strings using scalar data types like VARCHAR2 and RAW.

When creating tables, each column must be assigned a data type, which determines the nature of the values that can be inserted into the column. Common data types include number, date, and varchar2. These data types are also used to specify the nature of arguments for PL/SQL programs like functions and procedures.

When choosing a data type, you must carefully consider the data you plan to store and the operations you may want to perform upon it. Making good decisions at the table design stage reduces the potential negative downstream impact on space utilization and performance. Space is a consideration since some data types occupy a fixed length, consuming the same number of bytes, no matter what data is actually stored in it.

In pre-12c databases, long character strings of more than 4000 bytes had to be handled with creative workarounds, including CLOB or LONG data types, or multiple columns or variables. These approaches led to inefficient, unnecessarily complex designs and added processing overhead.

12c introduced the MAX_STRING_SIZE initialization parameter, which allows string data types to be much larger when the parameter is changed from its default value of STANDARD to EXTENDED. The VARCHAR2 data type stores variable-length character data of 1 to 4000 bytes if MAX_STRING_SIZE=STANDARD, or up to 32767 bytes if MAX_STRING_SIZE=EXTENDED.

RAW and NVARCHAR2 data types are affected in similar ways.
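As a sketch, enabling the extended sizes in a non-CDB database involves restarting in UPGRADE mode, changing the parameter, and running the utl32k.sql script; the exact procedure differs for PDBs, RAC, and Data Guard, so check the documentation before trying this on a real system:

```sql
-- Restart the instance in upgrade mode first:
--   SHUTDOWN IMMEDIATE;
--   STARTUP UPGRADE;
ALTER SYSTEM SET max_string_size = EXTENDED;
-- Then run @?/rdbms/admin/utl32k.sql and restart normally.

-- Once EXTENDED is in effect, declarations beyond 4000 bytes are allowed:
CREATE TABLE t_notes (
  id   NUMBER,
  body VARCHAR2(32767)  -- rejected under STANDARD
);
```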


Potential issues to consider:

  • Internally, extended data types are stored out-of-line using LOBs, but these cannot be manipulated using the DBMS_LOB interface.
  • When changing the MAX_STRING_SIZE parameter, some objects are updated and dependent objects may be invalidated, so ideally change this parameter during a maintenance window on your important databases. Note that the change from STANDARD to EXTENDED cannot be reversed.
  • List partitioning on EDT columns may potentially exceed the 4096 byte limit for the partition bounds. The DEFAULT partition may be used for data values that exceed the 4096 byte limit or a hash function may be used on the data to create unique identifiers smaller than 4096 bytes.
  • Indexing EDT columns may fail with “maximum key length exceeded” errors. For example, databases with an 8k default block size support a maximum key length of approximately 6400 bytes. A suggested work-around is to use a virtual column or function-based index to effectively shorten the index key length.
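The index work-around in the last bullet can be sketched like this (table and column names are hypothetical):

```sql
CREATE TABLE t_docs (
  id  NUMBER,
  txt VARCHAR2(32767)
);

-- A function-based index on a prefix keeps the key well under the block-size limit:
CREATE INDEX t_docs_txt_ix ON t_docs (SUBSTR(txt, 1, 1000));

-- Queries must use the same expression to benefit from the index:
SELECT id
FROM   t_docs
WHERE  SUBSTR(txt, 1, 1000) = SUBSTR(:val, 1, 1000)
AND    txt = :val;
```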

This feature will no doubt be improved, and the shortcomings dealt with, in future releases. For now, it offers a clean and elegant mechanism for handling large character data within existing applications, requiring minimal code changes.

Categories: DBA Blogs

Learning Spark Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia; O'Reilly Media

Surachart Opun - Sat, 2014-10-18 12:45
Apache Spark started as a research project in the AMPLab at UC Berkeley, which focuses on big data analytics. Spark is an open source cluster computing platform designed to be fast and general-purpose for data analytics: it's both fast to run and fast to write. Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, much quicker than with disk-based systems like Hadoop MapReduce. Users can write applications quickly in Java, Scala, or Python. In addition, it's easy to run standalone, or on EC2 or Mesos. It can read data from HDFS, HBase, Cassandra, and any Hadoop data source.
If you would like a book about Spark, consider Learning Spark: Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, and Matei Zaharia. It's a great book for anyone interested in Spark development and getting started with it. Readers will learn how to express MapReduce jobs with just a few simple lines of Spark code, and more...
  • Quickly dive into Spark capabilities such as collect, count, reduce, and save
  • Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
  • Learn how to run interactive, iterative, and incremental analyses
  • Integrate with Scala to manipulate distributed datasets like local collections
  • Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization
  • Use other languages by means of pipe() to achieve the equivalent of Hadoop streaming
The Early Release edition has seven chapters, covering an overview of Apache Spark, downloading it and the commands you should know, programming with RDDs (plus more advanced topics), working with key-value pairs, and so on. It is easy to read, with good examples. For people who want to learn Apache Spark or use it for data analytics, it's a book worth keeping on the shelf.

Book: Learning Spark: Lightning-Fast Big Data Analytics
Authors: Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia
Written by: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Bandwidth and Latency

Hemant K Chitale - Sat, 2014-10-18 08:40
Here is an article I posted on LinkedIn yesterday [for other posts, view my LinkedIn profile]:

Imagine an 8-lane highway. Now imagine a 4-lane highway. Which has the greater bandwidth?

Imagine your organisation sends its employees on a weekend "retreat" by bus. You have the choice of two locations: one that is 200 kilometres away and one that is 80 kilometres away. Assume that buses travel at a constant speed of 80kmph. Which resort will your employees get to faster?

The first question is about bandwidth. The second is about latency. (Why should I assume a fixed speed for the buses? Because I can assume a fixed speed at which electrons transfer over a wire, or photons over a light channel.)

Expand the question further. What if the organisation needs to send 32 employees in a 40-seater bus? Does it matter that the bus can travel on an 8-lane highway versus a 4-lane highway (assuming minimal other traffic on the highways at that time)?

Too often, naive "architects" do not differentiate between the two. If my organisation needs to configure a standby (DR) location for its key databases and has a choice of two locations with varying types of network services, it should consider *both* bandwidth and latency. If the volume of redo is 1000MBytes per minute and this, factoring overheads for packetizing the data, translates to 167Mbits per second, should I just go ahead and buy bandwidth of 200Mbits per second? If the two sites have two different network service providers offering different bandwidths, should I simply locate at the site with the greater bandwidth? What if the time it takes to synchronously write my data to site "A" is 4ms and the time to site "B" is 8ms? Should I not factor in the latency? (I am assuming that the "write to disk" speed of the hardware at either site is the same, i.e. the hardware is the same.) I can then add the complications of network routers and switches that add to the latency.

Software configurations, flow-control mechanisms, QoS definitions and hardware configuration can also impact bandwidth and latency in different ways.

Now, extend this to data transfers ("output" or "results") from a database server to an application server or end-user. If the existing link is 100Mbps and is upgraded to 1Gbps, the time to "report" 100 rows is unlikely to change, as this time is a function of the latency. However, if the number of concurrent users grows from 10 to 500, the bandwidth requirement may increase, and yet each user may still have the same "wait" time to see his results (assuming that there are no server hardware constraints returning results for 500 users).

On the flip side, consider ETL servers loading data into a database. Latency is as important as bandwidth. An ETL scheme that does "row-by-row" loads is constrained by latency, not bandwidth. Increasing bandwidth doesn't help such a scheme.

Think about the two.
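The redo-rate arithmetic in the DR example can be sanity-checked quickly; the 25% packetization overhead used below is an assumed figure chosen to reproduce the post's 167Mbps, not one stated in the article:

```shell
# 1000 MBytes of redo per minute, expressed in Mbit/s (integer arithmetic)
raw=$((1000 * 8 / 60))                          # raw payload rate
with_overhead=$((1000 * 8 * 125 / (60 * 100)))  # plus an assumed ~25% packet overhead
echo "raw: ${raw} Mbit/s, with overhead: ~${with_overhead} Mbit/s"
# prints: raw: 133 Mbit/s, with overhead: ~166 Mbit/s
```

Provisioning only for the raw rate would leave no headroom for the overhead, which is the post's point about buying bandwidth without considering the whole picture.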
Categories: DBA Blogs

Debugging High CPU Usage Using Perf Tool and vmcore Analysis

Pythian Group - Fri, 2014-10-17 08:08

There are several tools and technologies available to debug deeper into high CPU utilization in a system; perf, sysrq, oprofile, vmcore, and more. In this post, I will narrate the course of debugging a CPU utilization issue using technologies like perf and vmcore.

The following sar output is from a system experiencing high %system usage.

[root@prod-smsgw1 ~]# sar 1 14
Linux 2.6.32-431.20.5.el6.x86_64 (xxxxx) 08/08/2014 _x86_64_ (8 CPU)

05:04:57 PM CPU %user %nice %system %iowait %steal %idle
05:04:58 PM all 2.90 0.00 15.01 0.38 0.00 81.72
05:04:59 PM all 2.02 0.00 10.83 0.13 0.00 87.03
05:05:00 PM all 3.27 0.00 13.98 0.76 0.00 81.99
05:05:01 PM all 9.32 0.00 16.62 0.25 0.00 73.80

From ‘man sar’.

%system
Percentage of CPU utilization that occurred while executing at the system level (kernel). Note
that this field includes time spent servicing hardware and software interrupts.

This means the system is spending considerable time executing kernel code. The system runs a Java application, which is showing high CPU usage.

perf (performance analysis tools for Linux) is a good place to start in these kinds of scenarios.

The ‘perf record’ command captures system state for all CPUs in a perf.data file; -g enables call-graph recording and -p profiles a specific process.

The ‘perf report’ command then displays the report.
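The post doesn't show the exact invocation; a typical one looks like the sketch below (the PID is the java thread from the report that follows, and the 30-second window is an arbitrary choice):

```shell
# sample all CPUs with call graphs for 30 seconds; writes ./perf.data
perf record -a -g -- sleep 30

# or profile just the java process seen in the report
perf record -g -p 3284 -- sleep 30

# summarize the captured samples
perf report --stdio
```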

Samples: 18K of event ‘cpu-clock’, Event count (approx.): 18445, Thread: java(3284), DSO: [kernel.kallsyms]
58.66% java [k] _spin_lock ?
31.82% java [k] find_inode ?
2.66% java [k] _spin_unlock_irqrestore ?
2.44% java [k] mutex_spin_on_owner

Here we can see that considerable time is spent in spinlock and find_inode code on behalf of the Java application.

While the investigation was going on, the system crashed and dumped a vmcore. A vmcore is a memory dump of the system captured by tools like kdump.

I downloaded the debuginfo package and extracted vmlinux to analyse the vmcore.

# wget http://debuginfo.centos.org/6/x86_64/kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm
# rpm2cpio kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm |cpio -idv ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux

Then I ran the following command.

# crash ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux /var/crash/127.0.0.1-2014-08-07-17\:56\:19/vmcore

KERNEL: ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2014-08-07-17:56:19/vmcore [PARTIAL DUMP]
CPUS: 8
DATE: Thu Aug 7 17:56:17 2014
UPTIME: 1 days, 13:08:01
LOAD AVERAGE: 91.11, 91.54, 98.02
TASKS: 1417
NODENAME: xxxxx
RELEASE: 2.6.32-431.20.5.el6.x86_64
VERSION: #1 SMP Fri Jul 25 08:34:44 UTC 2014
MACHINE: x86_64 (2000 Mhz)
MEMORY: 12 GB
PANIC: “Oops: 0010 [#1] SMP ” (check log for details)
PID: 11233
COMMAND: “java”
TASK: ffff88019706b540 [THREAD_INFO: ffff880037a90000]
CPU: 6
STATE: TASK_RUNNING (PANIC)

From the vmcore I can see that the dtracedrv module was loaded and unloaded (possibly for running dtrace). This resulted in several warnings (the first warning, from ftrace, is expected), and then the kernel panicked as memory was corrupted; the corrupted instruction pointer also points to memory corruption. It looks like the panic was triggered by the dtrace module.

/tmp/dtrace/linux-master/build-2.6.32-431.20.5.el6.x86_64/driver/dtrace.c:dtrace_ioctl:16858: assertion failure buf->dtb_xamot != cached
Pid: 8442, comm: dtrace Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Pid: 3481, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Call Trace:
[] ? dump_cpu_stack+0x3d/0×50 [dtracedrv]
[] ? generic_smp_call_function_interrupt+0×90/0x1b0
[] ? smp_call_function_interrupt+0×27/0×40
[] ? call_function_interrupt+0×13/0×20
[] ? _spin_lock+0x1e/0×30
[] ? __mark_inode_dirty+0x6c/0×160
[] ? __set_page_dirty_nobuffers+0xdd/0×160
[] ? nfs_mark_request_dirty+0x1a/0×40 [nfs]
[] ? nfs_updatepage+0x3d2/0×560 [nfs]
[] ? nfs_write_end+0×152/0x2b0 [nfs]
[] ? iov_iter_copy_from_user_atomic+0×92/0×130
[] ? generic_file_buffered_write+0x18a/0x2e0
[] ? nfs_refresh_inode_locked+0x3e1/0xbd0 [nfs]
[] ? __generic_file_aio_write+0×260/0×490
[] ? __put_nfs_open_context+0×58/0×110 [nfs]
[] ? dtrace_vcanload+0×20/0x1a0 [dtracedrv]
[..]
BUG: unable to handle kernel paging request at ffffc90014fb415e
IP: [] 0xffffc90014fb415e
PGD 33c2b5067 PUD 33c2b6067 PMD 3e688067 PTE 0
Oops: 0010 [#1] SMP
last sysfs file: /sys/devices/system/node/node0/meminfo
CPU 6
Modules linked in: cpufreq_stats freq_table nfs fscache nfsd lockd nfs_acl auth_rpcgss sunrpc exportfs ipv6 ppdev parport_pc parport microcode vmware_balloon sg vmxnet3 i2c_piix4 i2c_core shpchp ext4 jbd2 mbcache sd_mod crc_t10dif vmw_pvscsi pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: dtracedrv]Pid: 11233, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
RIP: 0010:[] [] 0xffffc90014fb415e
RSP: 0018:ffff880037a91f70 EFLAGS: 00010246
RAX: 0000000000000001 RBX: 0000000000000219 RCX: ffff880037a91d40
RDX: 0000000000000001 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 00007fba9a67f4c0 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 00000000000003ff R12: 000000000001d4c0
R13: 0000000000000219 R14: 00007fb96feb06e0 R15: 00007fb96feb06d8
FS: 00007fb96fec1700(0000) GS:ffff880028380000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffc90014fb415e CR3: 000000031e49e000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process java (pid: 11233, threadinfo ffff880037a90000, task ffff88019706b540)
Stack:
0000000000000000 0000000000002be1 ffffffff8100b072 0000000000000293
000000000000ebe6 0000000000002be1 0000000000000000 0000000000000007
00000030692df333 000000000001d4c0 0000000000000001 00007fb96feb06d8
Call Trace:
[] ? system_call_fastpath+0×16/0x1b
Code: Bad RIP value.
RIP [] 0xffffc90014fb415e
RSP
CR2: ffffc90014fb415e
crash>

This allowed me to have a look at the CPU usage issue happening on the system. (Another way to capture a vmcore is to manually panic the system using sysrq + c.)

None of the runnable or uninterruptible-sleep processes had been running for a long time.

Looking at the oldest D state process..

crash> bt 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
#0 [ffff88027f3dfd88] schedule at ffffffff815287f0
#1 [ffff88027f3dfe50] __mutex_lock_killable_slowpath at ffffffff8152a0ee
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
#5 [ffff88027f3dff80] system_call_fastpath at ffffffff8100b072
RIP: 00000030692a90e5 RSP: 00007fa0586c51e0 RFLAGS: 00000206
RAX: 000000000000004e RBX: ffffffff8100b072 RCX: 00007fa0cd2cf000
RDX: 0000000000008000 RSI: 00007fa0bc0de9a8 RDI: 00000000000001f6
RBP: 00007fa0bc004cd0 R8: 00007fa0bc0de9a8 R9: 00007fa0cd2fce58
R10: 00007fa0cd2fcaa8 R11: 0000000000000246 R12: 00007fa0bc004cd0
R13: 00007fa0586c5460 R14: 00007fa0cd2cf1c8 R15: 00007fa0bc0de980
ORIG_RAX: 000000000000004e CS: 0033 SS: 002b

Looking at its stack..

crash> bt -f 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
[..]
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
ffff88027f3dfec8: ffff88027f3dfed8 ffff8801401e1600
ffff88027f3dfed8: ffff88027f3dff28 ffffffff8119f834
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
ffff88027f3dfee8: ffff88027f3dff08 ffffffff81196826
ffff88027f3dfef8: 00000000000001f6 00007fa0bc0de9a8
ffff88027f3dff08: ffff8801401e1600 0000000000008000
ffff88027f3dff18: 00007fa0bc004cd0 ffffffffffffffa8
ffff88027f3dff28: ffff88027f3dff78 ffffffff8119f9f9
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
ffff88027f3dff38: 00007fa0bc0de9a8 0000000000000000
ffff88027f3dff48: 0000000000008000 0000000000000000
ffff88027f3dff58: 00007fa0bc0de980 00007fa0cd2cf1c8
ffff88027f3dff68: 00007fa0586c5460 00007fa0bc004cd0
ffff88027f3dff78: 00007fa0bc004cd0 ffffffff8100b072

crash> vfs_readdir
vfs_readdir = $4 =
{int (struct file *, filldir_t, void *)} 0xffffffff8119f7b0

crash> struct file 0xffff8801401e1600
struct file {
f_u = {
fu_list = {
next = 0xffff88033213fce8,
prev = 0xffff88031823d740
},
fu_rcuhead = {
next = 0xffff88033213fce8,
func = 0xffff88031823d740
}
},
f_path = {
mnt = 0xffff880332368080,
dentry = 0xffff8802e2aaae00
},

[..]

crash> mount|grep ffff880332368080
ffff880332368080 ffff88033213fc00 nfs nanas1a.m-qube.com:/vol/test /scratch/test/test.deploy/test/test-internal

The process was waiting while reading from the above NFS mount.

The following process seems to be the culprit.

crash> bt 9104
PID: 9104 TASK: ffff8803323c8ae0 CPU: 0 COMMAND: “java”
#0 [ffff880028207e90] crash_nmi_callback at ffffffff8102fee6
#1 [ffff880028207ea0] notifier_call_chain at ffffffff8152e435
#2 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff8152e49a
#3 [ffff880028207ef0] notify_die at ffffffff810a11ce
#4 [ffff880028207f20] do_nmi at ffffffff8152c0fb
#5 [ffff880028207f50] nmi at ffffffff8152b9c0
[exception RIP: _spin_lock+30]
RIP: ffffffff8152b22e RSP: ffff88001d209b88 RFLAGS: 00000206
RAX: 0000000000000004 RBX: ffff88005823dd90 RCX: ffff88005823dd78
RDX: 0000000000000000 RSI: ffffffff81fd0820 RDI: ffffffff81fd0820
RBP: ffff88001d209b88 R8: ffff88017b9cfa90 R9: dead000000200200
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88005823dd48
R13: ffff88001d209c68 R14: ffff8803374ba4f8 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
— —
#6 [ffff88001d209b88] _spin_lock at ffffffff8152b22e
#7 [ffff88001d209b90] _atomic_dec_and_lock at ffffffff81283095
#8 [ffff88001d209bc0] iput at ffffffff811a5aa0
#9 [ffff88001d209be0] dentry_iput at ffffffff811a26c0
#10 [ffff88001d209c00] d_kill at ffffffff811a2821
#11 [ffff88001d209c20] __shrink_dcache_sb at ffffffff811a2bb6
#12 [ffff88001d209cc0] shrink_dcache_parent at ffffffff811a2f64
#13 [ffff88001d209d30] proc_flush_task at ffffffff811f9195
#14 [ffff88001d209dd0] release_task at ffffffff81074ec8
#15 [ffff88001d209e10] wait_consider_task at ffffffff81075cc6
#16 [ffff88001d209e80] do_wait at ffffffff810760f6
#17 [ffff88001d209ee0] sys_wait4 at ffffffff810762e3
#18 [ffff88001d209f80] system_call_fastpath at ffffffff8100b072

From the upstream kernel source:

/**
 * iput - put an inode
 * @inode: inode to put
 *
 * Puts an inode, dropping its usage count. If the inode use count hits
 * zero, the inode is then freed and may also be destroyed.
 *
 * Consequently, iput() can sleep.
 */
void iput(struct inode *inode)
{
	if (inode) {
		BUG_ON(inode->i_state & I_CLEAR);

		if (atomic_dec_and_lock(&inode->i_count, &inode->i_lock))
			iput_final(inode);
	}
}
EXPORT_SYMBOL(iput);

/**
 * atomic_dec_and_lock - lock on reaching reference count zero
 * @atomic: the atomic counter
 * @lock: the spinlock in question
 *
 * Decrements @atomic by 1. If the result is 0, returns true and locks
 * @lock. Returns false for all other cases.
 */
extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
#define atomic_dec_and_lock(atomic, lock) \
	__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))

#endif /* __LINUX_SPINLOCK_H */

It looks like the process was trying to shrink the dentry cache and was holding the spinlock while dropping an inode associated with it. This left other processes waiting on the spinlock, resulting in high %system utilization.

When the system again showed high %sys usage, I checked and found a large slab cache.

[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 4505788 kB
SReclaimable: 4313672 kB
SUnreclaim: 192116 kB

Checking slab usage on the running system using slabtop, I saw that nfs_inode_cache was the top consumer.

ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
[..]
2793624 2519618 90% 0.65K 465604 6 1862416K nfs_inode_cache

I ran ‘sync’ and then ‘echo 2 > /proc/sys/vm/drop_caches’ to drop the dcache, which fixed the high %sys usage on the system.

[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:04:45 AM CPU %user %nice %system %iowait %steal %idle
11:04:46 AM all 1.51 0.00 13.22 0.50 0.00 84.76
11:04:47 AM all 1.25 0.00 12.55 0.13 0.00 86.07
11:04:48 AM all 1.26 0.00 8.83 0.25 0.00 89.66
11:04:49 AM all 1.63 0.00 11.93 0.63 0.00 85.80
^C
[root@xxxxx ~]# sync
[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:23 AM CPU %user %nice %system %iowait %steal %idle
11:05:24 AM all 1.50 0.00 13.03 0.75 0.00 84.71
11:05:25 AM all 1.76 0.00 9.69 0.25 0.00 88.30
11:05:26 AM all 1.51 0.00 9.80 0.25 0.00 88.44
11:05:27 AM all 1.13 0.00 10.03 0.25 0.00 88.60
^C
[root@xxxxx ~]# echo 2 > /proc/sys/vm/drop_caches
[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 67660 kB

[root@prod-smsgw4 ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:58 AM CPU %user %nice %system %iowait %steal %idle
11:05:59 AM all 1.64 0.00 1.38 0.13 0.00 96.86
11:06:00 AM all 2.64 0.00 1.38 0.38 0.00 95.60
11:06:01 AM all 2.02 0.00 1.89 0.25 0.00 95.84
11:06:02 AM all 2.03 0.00 1.39 4.68 0.00 91.90
11:06:03 AM all 8.21 0.00 2.27 2.65 0.00 86.87
11:06:04 AM all 1.63 0.00 1.38 0.13 0.00 96.86
11:06:05 AM all 2.64 0.00 1.51 0.25 0.00 95.60

From the kernel documentation:

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches

The Java application was traversing the NFS mount and accessing a large number of files, resulting in a large number of nfs_inode_cache entries and hence a large dcache.

Tuning vm.vfs_cache_pressure would be a persistent solution for this.

From the kernel documentation:

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a “fair” rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.
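For example, a value above the default of 100 biases reclaim towards the dentry/inode caches. The 200 below is an illustrative value, not one from the post, and should be tested before production use:

```
# /etc/sysctl.conf (excerpt) -- hypothetical value; default is 100
vm.vfs_cache_pressure = 200
```

Running 'sysctl -p' applies the file without a reboot.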

Categories: DBA Blogs

NZOUG14 Beckons

Pythian Group - Fri, 2014-10-17 07:50

New Zealand is famous for Kiwis, pristine landscape, and the New Zealand Oracle User Group (NZOUG) conference.  The location of choice is New Zealand when it comes to making Lord of the Rings and making Oracle Lord of the Databases.

NZOUG 2014 will be held 19–21 November in the Owen G. Glenn Building at the University of Auckland. The main conference will be held on the 20th and 21st, preceded by a day of workshops on the 19th. It’s one of the premier Oracle conferences in the Southern Hemisphere.

Where there is Oracle, there is Pythian. Pythian will be present in full force in NZOUG 2014.

Following are Pythian sessions at NZOUG14:

12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management
Fahd Mirza Chughtai

Everyone Talks About DR – But Why So Few Implement It
Francisco Munoz Alvarez

DBA 101: Calling All New Database Administrators
Gustavo Rene Antunez

My First 100 Days with an Exadata
Gustavo Rene Antunez

Do You Really Know the Index Structures?
Deiby Gómez

Oracle Exadata: Storage Indexes vs Conventional Indexes
Deiby Gómez

Oracle 12c Test Drive
Francisco Munoz Alvarez

Why Use OVM for Oracle Database
Francisco Munoz Alvarez

Please check the full agenda of NZOUG14 here.

Categories: DBA Blogs

Log Buffer #393, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-17 07:47

Through their blogs, bloggers connect databases with their readers, acting as a bridge between the two. Log Buffer extends this nexus through the Log Buffer Edition.

Oracle:

MS Sharepoint and Oracle APEX integration.

Just a couple of screenshots of sqlplus+rlwrap+cygwin+console.

Say “Big Data” One More Time (I dare you!)

Update OEM Harvester after 12.1.0.4 Upgrade

Insight in the Roadmap for Oracle Cloud Platform Services.

SQL Server:

Troubleshoot SQL P2P replication doesn’t replicate DDL schema change.

Set-based Constraint Violation Reporting in SQL Server.

Where do you start fixing a SQL Server crash when there isn’t a single clue?

A permission gives a principal access to an object to perform certain actions on or with the object.

When you can’t get to your data because another application has it locked, a thorough knowledge of SQL Server concurrency will give you the confidence to decide what to do.

MySQL:

MySQL 5.7.5- More variables in replication performance_schema tables.

Multi-source replication for MySQL has been released as a part of 5.7.5-labs-preview downloadable from labs.mysql.com.

How to install multiple MySQL instances on a single host using MyEnv?

Percona Toolkit for MySQL with MySQL-SSL Connections.

InnoDB: Supporting Page Sizes of 32k and 64k.

Categories: DBA Blogs

NZOUG14 Beckons

Pakistan's First Oracle Blog - Thu, 2014-10-16 19:24
New Zealand is famous for Kiwis, pristine landscape, and New Zealand Oracle User Group (NZOUG) conference.  The location of choice is New Zealand when it comes to making Lord of the Rings and making Oracle Lord of the Databases.


NZOUG 2014 will be held 19–21 November in the Owen G. Glenn Building at the University of Auckland. The main conference will be held on the 20th and 21st, preceded by a day of workshops on the 19th. It's one of the premier Oracle conferences in the Southern Hemisphere.

Where there is Oracle, there is Pythian. Pythian will be present in full force in NZOUG 2014.

Following are Pythian sessions at NZOUG14:

12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management
Fahd Mirza Chughtai

Everyone Talks About DR – But Why So Few Implement It
Francisco Munoz Alvarez

DBA 101: Calling All New Database Administrators
Gustavo Rene Antunez

My First 100 Days with an Exadata
Gustavo Rene Antunez

Do You Really Know the Index Structures?
Deiby Gómez

Oracle Exadata: Storage Indexes vs Conventional Indexes
Deiby Gómez

Oracle 12c Test Drive
Francisco Munoz Alvarez

Why Use OVM for Oracle Database
Francisco Munoz Alvarez
Please check the full agenda of NZOUG14 here.
Categories: DBA Blogs

Partner Webcast – Oracle MAF 12c: Mobile Applications with Oracle Enterprise Pack for Eclipse

Oracle Mobile Platform allows enterprises to easily create and deliver engaging user experiences on one secure platform, for any device, on any application, and any data. Oracle Mobile Application...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 4

Pythian Group - Thu, 2014-10-16 09:11

Today’s blog post is part four of seven in a series dedicated to Deploying a Private Cloud at Home, where I will demonstrate how to configure the Image and Compute services on the controller node. See my previous blog post, where we began configuring the Keystone Identity Service.

  1. Install the Imaging service
    yum install -y openstack-glance python-glanceclient
  2. Configure Glance (Image Service) to use the MySQL database
    openstack-config --set /etc/glance/glance-api.conf database connection \
    mysql://glance:Your_Password@controller/glance
    openstack-config --set /etc/glance/glance-registry.conf database connection \
    mysql://glance:Your_Password@controller/glance
  3. Create the Glance database user by running the queries below at the MySQL prompt as root
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Your_Password';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Your_Password';
  4. Create the database tables for the Image Service
    su -s /bin/sh -c "glance-manage db_sync" glance
  5. Create Glance user to communicate to OpenStack services and Identity services
    keystone user-create --name=glance --pass=Your_Password --email=Your_Email
    keystone user-role-add --user=glance --tenant=service --role=admin
  6. Configuration of Glance config files
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
  7. Register the Image Service with the Identity service
    keystone service-create --name=glance --type=image --description="OpenStack Image Service"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ image / {print $2}') \
      --publicurl=http://controller:9292 \
      --internalurl=http://controller:9292 \
      --adminurl=http://controller:9292
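A note on the command substitution used above: the awk filter scrapes the new service's ID out of keystone's ASCII table output, by matching the row whose type column reads " image " and printing the second whitespace-separated field (field 1 is the leading "|"). A minimal stand-in shows what it does; the table and ID below are fabricated sample output, not real keystone data:

```shell
# Fabricated sample of `keystone service-list` output (the ID is made up)
sample='+----------------------------------+--------+-------+--------------------------+
| id                               | name   | type  | description              |
+----------------------------------+--------+-------+--------------------------+
| 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d | glance | image | OpenStack Image Service  |
+----------------------------------+--------+-------+--------------------------+'

# Match the data row containing " image " and print field 2, the service ID
service_id=$(printf '%s\n' "$sample" | awk '/ image / {print $2}')
echo "$service_id"
```

The same trick is reused below for the compute service, with `/ compute /` as the pattern.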
  8. Start the Glance-api and Glance-registry services and enable them to start at startup
    service openstack-glance-api start
    service openstack-glance-registry start
    chkconfig openstack-glance-api on
    chkconfig openstack-glance-registry on
  9. Download the CirrOS cloud image, which is created for testing purposes
    wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img \
    -O /root/cirros-0.3.2-x86_64-disk.img
  10. Upload the image to Glance using admin account
    source /root/admin-openrc.sh
    glance image-create --name "cirros-0.3.2-x86_64" \
    --disk-format qcow2 \
    --container-format bare \
    --is-public True \
    --progress < /root/cirros-0.3.2-x86_64-disk.img
  11. Install Compute controller service on controller node
    yum install -y openstack-nova-api openstack-nova-cert \
    openstack-nova-conductor openstack-nova-console \
    openstack-nova-novncproxy openstack-nova-scheduler \
    python-novaclient
  12. Configure compute service database
    openstack-config --set /etc/nova/nova.conf database connection mysql://nova:Your_Password@controller/nova
  13. Configure compute service configuration file
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip Controller_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen Controller_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address Controller_IP
  14. Create the nova database user by running the queries below at your MySQL prompt as root
    CREATE DATABASE nova;
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Your_Password';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Your_Password';
  15. Create Compute service tables
    su -s /bin/sh -c "nova-manage db sync" nova
  16. Create a nova user that Compute uses to authenticate with the Identity Service
    keystone user-create --name=nova --pass=Your_Password --email=Your_Email
    keystone user-role-add --user=nova --tenant=service --role=admin
  17. Configure Compute to use these credentials with the Identity Service running on the controller
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password Your_Password
  18. Register Compute with the Identity Service
    keystone service-create --name=nova --type=compute --description="OpenStack Compute"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
      --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
      --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
      --adminurl=http://controller:8774/v2/%\(tenant_id\)s
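The backslashes in those endpoint URLs only protect the parentheses from the shell; what keystone stores is the literal substitution pattern %(tenant_id)s, which is filled in per request later. A quick check of the quoting:

```shell
# The shell strips the backslash escapes, leaving plain parentheses;
# no %-expansion happens in bash
url=http://controller:8774/v2/%\(tenant_id\)s
echo "$url"   # prints http://controller:8774/v2/%(tenant_id)s
```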
  19. Now start Compute services and configure them to start when the system boots
    service openstack-nova-api start
    service openstack-nova-cert start
    service openstack-nova-consoleauth start
    service openstack-nova-scheduler start
    service openstack-nova-conductor start
    service openstack-nova-novncproxy start
    chkconfig openstack-nova-api on
    chkconfig openstack-nova-cert on
    chkconfig openstack-nova-consoleauth on
    chkconfig openstack-nova-scheduler on
    chkconfig openstack-nova-conductor on
    chkconfig openstack-nova-novncproxy on
  20. You can verify your configuration and list available images
    source /root/admin-openrc.sh
    nova image-list

 

This concludes the initial configuration of the controller node before the configuration of the compute node. Stay tuned for part five, where I will demonstrate how to configure the compute node.

Categories: DBA Blogs

Improve Developer Productivity with SOA 12c

Oracle released SOA Suite 12c (12.1.3) bringing a further integration between components and a bunch of new features. Most of them are quite spectacular, but at the same time all are...


Tweaked bind variable script

Bobby Durrett's DBA Blog - Wed, 2014-10-15 15:17

I modified the bind variable extraction script that I normally use to make it more helpful to me.

Here was my earlier post with the old script: blog post

Here is my updated script:

set termout on 
set echo on
set linesize 32000
set pagesize 1000
set trimspool on

column NAME format a3
column VALUE_STRING format a17

spool bind2.log

select * from 
(select distinct
to_char(sb.LAST_CAPTURED,'YYYY-MM-DD HH24:MI:SS') 
  DATE_TIME,
sb.NAME,
sb.VALUE_STRING 
from 
DBA_HIST_SQLBIND sb
where 
sb.sql_id='gxk0cj3qxug85' and
sb.WAS_CAPTURED='YES')
order by 
DATE_TIME,
NAME;

spool off

Replace gxk0cj3qxug85 with the sql_id of your own query.
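If you look at different statements often, a SQL*Plus substitution variable saves editing the script each time. A sketch of that variant (untested here; ACCEPT and &sql_id are standard SQL*Plus substitution features):

```sql
-- Sketch: prompt once for the sql_id instead of hard-coding it
accept sql_id char prompt 'Enter sql_id: '

select * from
(select distinct
to_char(sb.LAST_CAPTURED,'YYYY-MM-DD HH24:MI:SS')
  DATE_TIME,
sb.NAME,
sb.VALUE_STRING
from
DBA_HIST_SQLBIND sb
where
sb.sql_id='&sql_id' and
sb.WAS_CAPTURED='YES')
order by
DATE_TIME,
NAME;
```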

The output looks like this (I’ve scrambled the values to obscure production data):

DATE_TIME           NAM VALUE_STRING
------------------- --- -----------------
2014-08-29 11:22:13 :B1 ABC
2014-08-29 11:22:13 :B2 DDDDDD
2014-08-29 11:22:13 :B3 2323
2014-08-29 11:22:13 :B4 555
2014-08-29 11:22:13 :B5 13412341
2014-08-29 11:22:13 :B6 44444
2014-08-29 11:26:47 :B1 gtgadsf
2014-08-29 11:26:47 :B2 adfaf
2014-08-29 11:26:47 :B3 4444
2014-08-29 11:26:47 :B4 5435665
2014-08-29 11:26:47 :B5 4444
2014-08-29 11:26:47 :B6 787

This is better than the original script because it keeps related bind variable values together.

– Bobby



Please look at latest Oct 2014 Oracle patching

Grumpy old DBA - Wed, 2014-10-15 11:23
This one looks like the real thing ... getting advice to "not skip" the patching process for a whole bunch of things included here.

I'm just saying ...

12c: Access Objects Of A Common User Non-existent In Root

Oracle in Action - Tue, 2014-10-14 23:56


In a multitenant environment, a common user is a database user whose identity and password are known in the root and in every existing and future pluggable database (PDB). Common users can connect to the root and perform administrative tasks specific to the root or PDBs. There are two types of common users :

  • All Oracle-supplied administrative user accounts, such as SYS and SYSTEM
  • User-created common users, whose names must start with C## or c##.

When a PDB having a user-created common user is plugged into another CDB and the target CDB does not have a common user with the same name, the common user in the newly plugged-in PDB becomes a locked account.
To access such common user’s objects, you can do one of the following:

  • Leave the user account locked and use the objects of its schema.
  • Create a common user with the same name as the locked account.

Let’s demonstrate …

Current scenario:

Source CDB : CDB1
- one PDB (PDB1)
- Two common users C##NXISTS and C##EXISTS

Destination CDB : CDB2
- No PDB
- One common user C##EXISTS

Overview:
- As user C##NXISTS, create and populate a table in PDB1@CDB1
- Unplug PDB1 from CDB1 and plug into CDB2 as PDB1_COPY
- Open PDB1_COPY and Verify that

  •  user C##NXISTS has not been created in root
  • users C##NXISTS and C##EXISTS both have been created in PDB1_COPY. The account of C##EXISTS is open whereas the account of C##NXISTS is locked.

- Unlock user C##NXISTS account in PDB1_COPY.
- Try to connect to pdb1_copy as C##NXISTS  – fails with internal error.
- Create a local user  LUSER in PDB1_COPY with privileges on C##NXISTS’  table and verify that LUSER can access C##NXISTS’ table.
- Create user C##NXISTS in root with PDB1_COPY closed. The account of C##NXISTS is automatically opened on opening PDB1_COPY.
- Try to connect as C##NXISTS to pdb1_copy – succeeds

Implementation:

– Setup –

CDB1>sho con_name

CON_NAME
------------------------------
CDB$ROOT

CDB1>sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1                           READ WRITE NO

CDB1>select username, common from cdb_users where username like 'C##%';

no rows selected

- Create 2 common users in CDB1
    - C##NXISTS
    - C##EXISTS

CDB1>create user C##EXISTS identified by oracle container=all;
     create user C##NXISTS identified by oracle container=all;

     col username for a30
     col common for a10
     select username, common from cdb_users where   username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##NXISTS                      YES
C##EXISTS                      YES
C##NXISTS                      YES
C##EXISTS                      YES

- Create user C##EXISTS  in CDB2

CDB2>sho parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      cdb2

CDB2>sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO

CDB2>create user C##EXISTS identified by oracle container=all;
     col username for a30
     col common for a10

     select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- As user C##NXISTS, create and populate a table in PDB1@CDB1

CDB1>alter session set container=pdb1;
     alter user C##NXISTS quota unlimited on users;
     create table C##NXISTS.test(x number);
     insert into C##NXISTS.test values (1);
     commit;

- Unplug PDB1 from CDB1

CDB1>alter session set container=cdb$root;
     alter pluggable database pdb1 close immediate;
     alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

CDB1>select name from v$datafile where con_id = 3;

NAME
-----------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/example01.dbf

- Plug in PDB1 into CDB2 as PDB1_COPY

CDB2>create pluggable database pdb1_copy using '/home/oracle/pdb1.xml'
     file_name_convert=('/u01/app/oracle/oradata/cdb1/pdb1','/u01/app/oracle/oradata/cdb2/pdb1_copy');

sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1_COPY                      MOUNTED

– Verify that C##NXISTS user is not visible as PDB1_COPY is closed

CDB2>col username for a30
col common for a10
select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- Open PDB1_COPY and Verify that
  . users C##NXISTS and C##EXISTS both have been created in PDB.
  . Account of C##EXISTS is open whereas account of C##NXISTS is  locked.

CDB2>alter pluggable database pdb1_copy open;
col account_status for a20
select con_id, username, common, account_status from cdb_users  where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
         1 C##EXISTS                      YES        OPEN
         3 C##EXISTS                      YES        OPEN
         3 C##NXISTS                      YES        LOCKED

– Unlock user C##NXISTS account on PDB1_COPY

CDB2>alter session set container = pdb1_copy;
     alter user C##NXISTS account unlock;
     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
         3 C##EXISTS                      YES        OPEN
         3 C##NXISTS                      YES        OPEN

– Try to connect as C##NXISTS to pdb1_copy – fails with internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-00600: internal error code, arguments: [kziaVrfyAcctStatinRootCbk: !user],
[C##NXISTS], [], [], [], [], [], [], [], [], [], []

- Since user C##NXISTS cannot connect to pdb1_copy, we can lock the account again

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     alter user C##NXISTS account lock;

     col account_status for a20
     select username, common, account_status from dba_users     where username like 'C##%' order by username;

USERNAME                       COMMON     ACCOUNT_STATUS
------------------------------ ---------- --------------------
C##EXISTS                      YES        OPEN
C##NXISTS                      YES        LOCKED

– Now if C##NXISTS tries to log in to PDB1_COPY, ORA-28000 is returned instead of the internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-28000: the account is locked

How to access C##NXISTS objects?

SOLUTION – I

- Create a local user in PDB1_COPY with appropriate object privileges on C##NXISTS’ table

CDB2>conn sys/oracle@localhost:1522/pdb1_copy  as sysdba

     create user luser identified by oracle;
     grant select on c##nxists.test to luser;
     grant create session to luser;

– Check that the local user can access common user C##NXISTS’ tables

CDB2>conn luser/oracle@localhost:1522/pdb1_copy;
     select * from c##nxists.test;
X
----------
1

SOLUTION – II :  Create the common user C##NXISTS in CDB2

- Check that C##NXISTS has not been created in CDB$root

CDB2>conn sys/oracle@cdb2 as sysdba
     col account_status for a20
     select con_id, username, common, account_status from cdb_users    where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
         1 C##EXISTS                      YES        OPEN
         3 C##EXISTS                      YES        OPEN
         3 C##NXISTS                      YES        LOCKED

- Try to create user C##NXISTS with PDB1_COPY open – fails

CDB2>create user c##NXISTS identified by oracle;
create user c##NXISTS identified by oracle
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1_COPY
ORA-01920: user name 'C##NXISTS' conflicts with another user or role  name

- Close PDB1_COPY, create user C##NXISTS in root, and verify that the account is automatically unlocked on opening PDB1_COPY

CDB2>alter pluggable database pdb1_copy close;
     create user c##NXISTS identified by oracle;
     alter pluggable database pdb1_copy open;

     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
         1 C##EXISTS                      YES        OPEN
         1 C##NXISTS                      YES        OPEN
         3 C##EXISTS                      YES        OPEN
         3 C##NXISTS                      YES        OPEN

– Connect to PDB1_COPY as C##NXISTS after granting appropriate privilege – Succeeds

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-01045: user C##NXISTS lacks CREATE SESSION privilege; logon denied
Warning: You are no longer connected to ORACLE.

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     grant create session to c##nxists;
     conn c##nxists/oracle@localhost:1522/pdb1_copy

CDB2>sho con_name

CON_NAME
------------------------------
PDB1_COPY

CDB2>sho user

USER is "C##NXISTS"

CDB2>select * from test;

X
----------
1

References:
http://docs.oracle.com/database/121/DBSEG/users.htm#DBSEG573


Deploying a Private Cloud at Home — Part 3

Pythian Group - Tue, 2014-10-14 14:59

Today’s blog post is part three of seven in a series dedicated to Deploying Private Cloud at Home, where I will demonstrate how to configure OpenStack Identity service on the controller node. We have already configured the required repo in part two of the series, so let’s get started on configuring Keystone Identity Service.

  1. Install keystone on the controller node.
    yum install -y openstack-keystone python-keystoneclient

    OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports several message brokers, including RabbitMQ, Qpid, and ZeroMQ. I am using Qpid as it is available on most distros.

  2. Install Qpid Messagebroker server.
    yum install -y qpid-cpp-server

    Now modify the Qpid configuration file to disable authentication by setting the line below in /etc/qpidd.conf

    auth=no

    Now start the qpid service and enable it to start at server startup

    chkconfig qpidd on
    service qpidd start
  3. Now configure keystone to use MySQL database
    openstack-config --set /etc/keystone/keystone.conf \
       database connection mysql://keystone:YOUR_PASSWORD@controller/keystone
  4. Next create the keystone database user by running the queries below at your MySQL prompt as root.
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'YOUR_PASSWORD';
  5. Now create database tables
    su -s /bin/sh -c "keystone-manage db_sync" keystone

    Currently we don’t have any user accounts that can communicate with OpenStack services and the Identity Service. So we will set up an authorization token to use as a shared secret between the Identity Service and the other OpenStack services, and store it in the configuration file.

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN
    openstack-config --set /etc/keystone/keystone.conf DEFAULT \
       admin_token $ADMIN_TOKEN
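As a sanity check, `openssl rand -hex 10` draws 10 random bytes and hex-encodes them, so the shared secret is always 20 hexadecimal characters:

```shell
# 10 random bytes, hex-encoded -> a 20-character lowercase hex token
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "${#ADMIN_TOKEN}"   # prints 20
```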
  6. Keystone uses PKI tokens by default. Now create the signing keys and certificates, and restrict access to the generated data
    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl
  7. Start the keystone identity service and enable it to start at boot
    service openstack-keystone start
    chkconfig openstack-keystone on

    The Keystone Identity Service also stores expired tokens in the database. We will create the crontab entry below to purge the expired tokens

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
    echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
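The `(crontab -l ... | grep -q token_flush) ||` guard makes that command idempotent: the entry is appended only if it is not already present, so re-running the setup does not create duplicate cron jobs. The same pattern, demonstrated against a temporary file standing in for the crontab:

```shell
# Append-if-missing pattern: a temp file stands in for the real crontab
f=$(mktemp)
add_flush_job() {
    grep -q token_flush "$f" || \
        echo '@hourly /usr/bin/keystone-manage token_flush' >> "$f"
}
add_flush_job
add_flush_job               # second run is a no-op
count=$(grep -c token_flush "$f")
echo "$count"               # prints 1, not 2
rm -f "$f"
```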
  8. Now we will create admin user for keystone and define roles for admin user
    export OS_SERVICE_TOKEN=$ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
    keystone user-create --name=admin --pass=Your_Password --email=Your_Email
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    keystone user-create --name=pythian --pass=Your_Password --email=Your_Email
    keystone tenant-create --name=pythian --description="Pythian Tenant"
    keystone user-role-add --user=pythian --role=_member_ --tenant=pythian
    keystone tenant-create --name=service --description="Service Tenant"
  9. Now we create a service entry for the identity service
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://controller:5000/v2.0 \
    --internalurl=http://controller:5000/v2.0 \
    --adminurl=http://controller:35357/v2.0
  10. Verify Identity service installation
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
  11. Request an authentication token by using the admin user and the password you chose for that user
    keystone --os-username=admin --os-password=Your_Password \
      --os-auth-url=http://controller:35357/v2.0 token-get
    keystone --os-username=admin --os-password=Your_Password \
      --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
      token-get
  12. We will save the required parameters in admin-openrc.sh as below
    export OS_USERNAME=admin
    export OS_PASSWORD=Your_Password
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
  13. Next, check that everything is working and that Keystone interacts with the OpenStack services. We will source the admin-openrc.sh file to load the Keystone parameters
    source /root/admin-openrc.sh
  14. List Keystone tokens using:
    keystone token-get
  15. List Keystone users using
    keystone user-list

If all the above commands give you output, your Keystone Identity Service is all set up, and you can proceed to the next steps. In part four, I will discuss how to configure and set up the Image Service to store images.


Oracle E-Business Suite Updates From OpenWorld 2014

Pythian Group - Tue, 2014-10-14 08:29

Oracle OpenWorld has always been my most exciting conference to attend. I always see high energy levels everywhere, and it kind of revs me up to tackle new upcoming technologies. This year I concentrated on attending mostly Oracle E-Business Suite release 12.2 and Oracle 12c Database-related sessions.

On the Oracle E-Business Suite side, I started off with the Oracle EBS Customer Advisory Board Meeting, with great presentations on new features like the new iPad touch-friendly interface in Oracle EBS 12.2.4. This can be enabled by setting the “Self Service Personal Home Page mode” profile value to “Framework Simplified”. We also discussed some pros and cons of the new downtime mode feature of the adop online patching utility, which allows release update packs (like the 12.2.3 and 12.2.4 patches) to be applied without starting up a new online patching session. I will cover more details about that in a separate blog post. In the meantime, take a look at the simplified home page of my 12.2.4 sandbox instance.

Oracle EBS 12.2.4 Simplified Interface

Steven Chan’s presentation on the EBS Certification Roadmap announced upcoming support for Android tablets, the Chrome browser, IE11, Oracle Unified Directory, and more. Oracle did not extend any support deadlines for Oracle EBS 11i or R12 this time, so to all EBS customers on 11i: it’s time to move to R12.2. I also attended a good session on testing best practices for Oracle E-Business Suite, which had a good slide on the extra testing required during an online patching cycle. I am planning a separate blog post with more details on that, as it is an important piece of information one might otherwise ignore. Oracle also announced a new product called Flow Builder, part of the Oracle Application Testing Suite, which helps users test functional flows in Oracle EBS.

On the 12c Database side, I attended great sessions by Christian Antognini on Adaptive Query Optimization and by Markus Michalewicz on 12c RAC Operational Best Practices and RAC Cache Fusion Internals. Markus’s Cache Fusion presentation has some great recommendations on using _gc_policy_minimum instead of turning off DRM completely with _gc_policy_time=0. There is also now a way to control DRM for an object using the package DBMS_CACHEUTIL.

I also attended sessions on some new, upcoming technologies that are picking up in the Oracle space, like Oracle NoSQL, Oracle Big Data SQL, and the Oracle Data Integrator Hadoop connectors. These products seem to have a great future ahead and a good chance of becoming mainstream on the data warehousing side of businesses.


Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

The Oracle Instructor - Tue, 2014-10-14 08:20

When using the Data Guard Broker, you don’t need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker is doing that for you. Forget about what you may have heard about VALID_FOR – you don’t need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let’s look at a typical example about the redo log transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site and spfiles are in use that the broker (the DMON background process, to be precise) can access:

Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception to this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo log transport mode to sync for the standby database. It helps to think of the log transport mode as an attribute (respectively a property) of a certain database in your configuration, because that is how the broker sees it, too.

 

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Configuration for physt

The mirrored broker configuration files on all involved database servers contain that logxptmode property now. There is no new entry in the spfile of physt required. The present configuration allows now to raise the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.

Configuration for prima

Now let’s do that switchover and see how the broker automatically ensures that the new primary physt will ship redo to prima:

 

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was run the switchover command, and without me specifying any LOG_ARCHIVE* parameter, the broker did it all, as this picture shows:

Configuration after switchover

Notably, the spfile of the physt database now got the new entry:

 

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Not only is it unnecessary to specify any of the LOG_ARCHIVE* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise, it will at least complain with warning messages. As an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly that triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting

To resolve the inconsistency, I set the value with a broker command as well – which is what I should have done instead of the ALTER SYSTEM command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, of them are LOG_ARCHIVE* parameters:

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT
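Accordingly, if you need to change any of these settings, do it through the corresponding broker property rather than with ALTER SYSTEM. A sketch of what that looks like (the property names are the documented 11.2 broker properties; the values here are just illustrations):

```
DGMGRL> show database verbose prima;
DGMGRL> edit database prima set property StandbyFileManagement='AUTO';
DGMGRL> edit database prima set property ArchiveLagTarget=1200;
DGMGRL> edit database prima set property LogArchiveMaxProcesses=8;
```

SHOW DATABASE VERBOSE lists all configurable properties together with their current values, which is a convenient way to spot inconsistencies before they turn into ORA-16714 warnings.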


Tagged: Data Guard, High Availability
Categories: DBA Blogs

Digital Learning – LVC: It’s the attitude, stupid!

The Oracle Instructor - Mon, 2014-10-13 06:33

The single most important factor for successful digital learning is the attitude of both the instructor and the attendees towards the course format. Delivering countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites, of course: a reliable and fast network connection and a good headset are mandatory, else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, working lab environments for hands-on practices and, last but not least, knowledgeable instructors. Note that LVCs use the very same courseware, lab environments and instructors as our classroom courses at Oracle University education centers. The major difference is in your head :-)

When I delivered my first couple of LVCs, I felt quite uncomfortable with the new format, and accordingly my performance was not as good as usual. By now, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same quality as my classroom courses. In some respects they are even better: I always struggle to produce clean sketches with readable handwriting on a whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

Data Guard Real-Time Apply

Attendees get all my sketches by email afterwards if they like.

In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

So this is also true on the side of the attendee: The attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses because you can attend from home without the time and expenses it takes to travel :-)

Some common objections against LVC I have heard from customers and my usual responses:

An LVC doesn’t deliver the same amount of interaction as a classroom course!

That is not necessarily so: you are in a small group (usually fewer than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just as in a classroom. Additionally, you have a chatbox available, which is sometimes extremely helpful, especially with non-native speakers in the class :-) You can also easily exchange email addresses via the chatbox and stay in touch even after the LVC.

I have no appropriate working place to attend an LVC!

Then you have no appropriate working place at all for anything that requires a certain amount of concentration. Talk to your manager about it – maybe a quiet room is available for the duration of the LVC.

I cannot stay attentive when staring at a computer screen the whole day!

Of course not, that is why we have breaks and practices in between the lessons.

Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?


Tagged: Digital Learning, LVC
Categories: DBA Blogs

Partner Webcast Special Edition – Oracle Mobile Application Framework: Developer Challenge

Win up to $6,000 by developing a mobile application with Oracle Mobile Application Framework! Mobile technology has changed the way that we live and work. As mobile interfaces take the lion’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs