DBA Blogs

Debugging High CPU Usage Using Perf Tool and vmcore Analysis

Pythian Group - Fri, 2014-10-17 08:08

There are several tools and technologies available to debug deeper into high CPU utilization in a system; perf, sysrq, oprofile, vmcore, and more. In this post, I will narrate the course of debugging a CPU utilization issue using technologies like perf and vmcore.

The following sar output is from a system facing high %system usage.

[root@prod-smsgw1 ~]# sar 1 14
Linux 2.6.32-431.20.5.el6.x86_64 (xxxxx) 08/08/2014 _x86_64_ (8 CPU)

05:04:57 PM CPU %user %nice %system %iowait %steal %idle
05:04:58 PM all 2.90 0.00 15.01 0.38 0.00 81.72
05:04:59 PM all 2.02 0.00 10.83 0.13 0.00 87.03
05:05:00 PM all 3.27 0.00 13.98 0.76 0.00 81.99
05:05:01 PM all 9.32 0.00 16.62 0.25 0.00 73.80

From ‘man sar’.

%system
Percentage of CPU utilization that occurred while executing at the system level (kernel). Note
that this field includes time spent servicing hardware and software interrupts.

This means that the system is spending considerable time executing kernel code. The system runs a Java application which is showing high CPU usage.

perf – the performance analysis tool for Linux – is a good place to start in this kind of scenario.

The ‘perf record’ command captures the system state for all CPUs into a perf.data file; -g enables call-graph recording and -p profiles a single process.

The ‘perf report’ command then displays the captured report.
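As a rough sketch of the invocation (the 30-second window here is an assumption; PID 3284 is the Java thread shown in the report below):

# system-wide capture with call graphs for ~30 seconds
perf record -a -g sleep 30
# or attach to a single process instead
perf record -g -p 3284 sleep 30
# browse the recorded samples
perf report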

Samples: 18K of event ‘cpu-clock’, Event count (approx.): 18445, Thread: java(3284), DSO: [kernel.kallsyms]
58.66% java [k] _spin_lock
31.82% java [k] find_inode
2.66% java [k] _spin_unlock_irqrestore
2.44% java [k] mutex_spin_on_owner

Here we can see that considerable time is spent in the spinlock and find_inode code for the java application.

While the investigation was going on, the system crashed and dumped a vmcore. A vmcore is a memory dump of the system captured by tools like kdump.

I downloaded the debuginfo file and extracted the vmlinux to analyse the vmcore.

# wget http://debuginfo.centos.org/6/x86_64/kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm
# rpm2cpio kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm |cpio -idv ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux

Then I ran the following command.

# crash ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux /var/crash/127.0.0.1-2014-08-07-17\:56\:19/vmcore

KERNEL: ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2014-08-07-17:56:19/vmcore [PARTIAL DUMP]
CPUS: 8
DATE: Thu Aug 7 17:56:17 2014
UPTIME: 1 days, 13:08:01
LOAD AVERAGE: 91.11, 91.54, 98.02
TASKS: 1417
NODENAME: xxxxx
RELEASE: 2.6.32-431.20.5.el6.x86_64
VERSION: #1 SMP Fri Jul 25 08:34:44 UTC 2014
MACHINE: x86_64 (2000 Mhz)
MEMORY: 12 GB
PANIC: “Oops: 0010 [#1] SMP ” (check log for details)
PID: 11233
COMMAND: “java”
TASK: ffff88019706b540 [THREAD_INFO: ffff880037a90000]
CPU: 6
STATE: TASK_RUNNING (PANIC)

From the vmcore I can see that the dtracedrv module was loaded and unloaded (possibly for running dtrace). This resulted in several warnings (the first warning, from ftrace, is expected), and then the kernel panicked as memory got corrupted. The instruction pointer is corrupted, which points to memory corruption; it looks like the panic was triggered by the dtrace module.

/tmp/dtrace/linux-master/build-2.6.32-431.20.5.el6.x86_64/driver/dtrace.c:dtrace_ioctl:16858: assertion failure buf->dtb_xamot != cached
Pid: 8442, comm: dtrace Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Pid: 3481, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Call Trace:
[] ? dump_cpu_stack+0x3d/0x50 [dtracedrv]
[] ? generic_smp_call_function_interrupt+0x90/0x1b0
[] ? smp_call_function_interrupt+0x27/0x40
[] ? call_function_interrupt+0x13/0x20
[] ? _spin_lock+0x1e/0x30
[] ? __mark_inode_dirty+0x6c/0x160
[] ? __set_page_dirty_nobuffers+0xdd/0x160
[] ? nfs_mark_request_dirty+0x1a/0x40 [nfs]
[] ? nfs_updatepage+0x3d2/0x560 [nfs]
[] ? nfs_write_end+0x152/0x2b0 [nfs]
[] ? iov_iter_copy_from_user_atomic+0x92/0x130
[] ? generic_file_buffered_write+0x18a/0x2e0
[] ? nfs_refresh_inode_locked+0x3e1/0xbd0 [nfs]
[] ? __generic_file_aio_write+0x260/0x490
[] ? __put_nfs_open_context+0x58/0x110 [nfs]
[] ? dtrace_vcanload+0x20/0x1a0 [dtracedrv]
[..]
BUG: unable to handle kernel paging request at ffffc90014fb415e
IP: [] 0xffffc90014fb415e
PGD 33c2b5067 PUD 33c2b6067 PMD 3e688067 PTE 0
Oops: 0010 [#1] SMP
last sysfs file: /sys/devices/system/node/node0/meminfo
CPU 6
Modules linked in: cpufreq_stats freq_table nfs fscache nfsd lockd nfs_acl auth_rpcgss sunrpc exportfs ipv6 ppdev parport_pc parport microcode vmware_balloon sg vmxnet3 i2c_piix4 i2c_core shpchp ext4 jbd2 mbcache sd_mod crc_t10dif vmw_pvscsi pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: dtracedrv]
Pid: 11233, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
RIP: 0010:[] [] 0xffffc90014fb415e
RSP: 0018:ffff880037a91f70 EFLAGS: 00010246
RAX: 0000000000000001 RBX: 0000000000000219 RCX: ffff880037a91d40
RDX: 0000000000000001 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 00007fba9a67f4c0 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 00000000000003ff R12: 000000000001d4c0
R13: 0000000000000219 R14: 00007fb96feb06e0 R15: 00007fb96feb06d8
FS: 00007fb96fec1700(0000) GS:ffff880028380000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffc90014fb415e CR3: 000000031e49e000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process java (pid: 11233, threadinfo ffff880037a90000, task ffff88019706b540)
Stack:
0000000000000000 0000000000002be1 ffffffff8100b072 0000000000000293
000000000000ebe6 0000000000002be1 0000000000000000 0000000000000007
00000030692df333 000000000001d4c0 0000000000000001 00007fb96feb06d8
Call Trace:
[] ? system_call_fastpath+0x16/0x1b
Code: Bad RIP value.
RIP [] 0xffffc90014fb415e
RSP
CR2: ffffc90014fb415e
crash>

This allowed me to have a look at the CPU usage issue happening in the system. Another way to capture a vmcore is to manually panic the system using sysrq + c.
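For reference, a minimal sketch of triggering that manually (this panics the box immediately, so only use it when kdump is configured and a dump is really wanted):

# enable the sysrq trigger, then force a crash to capture a vmcore
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger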

None of the runnable and uninterruptible-sleep processes had been running for a long time.

Looking at the oldest D-state process:

crash> bt 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
#0 [ffff88027f3dfd88] schedule at ffffffff815287f0
#1 [ffff88027f3dfe50] __mutex_lock_killable_slowpath at ffffffff8152a0ee
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
#5 [ffff88027f3dff80] system_call_fastpath at ffffffff8100b072
RIP: 00000030692a90e5 RSP: 00007fa0586c51e0 RFLAGS: 00000206
RAX: 000000000000004e RBX: ffffffff8100b072 RCX: 00007fa0cd2cf000
RDX: 0000000000008000 RSI: 00007fa0bc0de9a8 RDI: 00000000000001f6
RBP: 00007fa0bc004cd0 R8: 00007fa0bc0de9a8 R9: 00007fa0cd2fce58
R10: 00007fa0cd2fcaa8 R11: 0000000000000246 R12: 00007fa0bc004cd0
R13: 00007fa0586c5460 R14: 00007fa0cd2cf1c8 R15: 00007fa0bc0de980
ORIG_RAX: 000000000000004e CS: 0033 SS: 002b

Looking at its stack:

crash> bt -f 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
[..]
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
ffff88027f3dfec8: ffff88027f3dfed8 ffff8801401e1600
ffff88027f3dfed8: ffff88027f3dff28 ffffffff8119f834
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
ffff88027f3dfee8: ffff88027f3dff08 ffffffff81196826
ffff88027f3dfef8: 00000000000001f6 00007fa0bc0de9a8
ffff88027f3dff08: ffff8801401e1600 0000000000008000
ffff88027f3dff18: 00007fa0bc004cd0 ffffffffffffffa8
ffff88027f3dff28: ffff88027f3dff78 ffffffff8119f9f9
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
ffff88027f3dff38: 00007fa0bc0de9a8 0000000000000000
ffff88027f3dff48: 0000000000008000 0000000000000000
ffff88027f3dff58: 00007fa0bc0de980 00007fa0cd2cf1c8
ffff88027f3dff68: 00007fa0586c5460 00007fa0bc004cd0
ffff88027f3dff78: 00007fa0bc004cd0 ffffffff8100b072

crash> vfs_readdir
vfs_readdir = $4 =
{int (struct file *, filldir_t, void *)} 0xffffffff8119f7b0
crash> struct file 0xffff8801401e1600
struct file {
f_u = {
fu_list = {
next = 0xffff88033213fce8,
prev = 0xffff88031823d740
},
fu_rcuhead = {
next = 0xffff88033213fce8,
func = 0xffff88031823d740
}
},
f_path = {
mnt = 0xffff880332368080,
dentry = 0xffff8802e2aaae00
},

[..]

crash> mount|grep ffff880332368080
ffff880332368080 ffff88033213fc00 nfs nanas1a.m-qube.com:/vol/test /scratch/test/test.deploy/test/test-internal

The process was waiting while reading from the above NFS mount.

The following process seems to be the culprit.

crash> bt 9104
PID: 9104 TASK: ffff8803323c8ae0 CPU: 0 COMMAND: “java”
#0 [ffff880028207e90] crash_nmi_callback at ffffffff8102fee6
#1 [ffff880028207ea0] notifier_call_chain at ffffffff8152e435
#2 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff8152e49a
#3 [ffff880028207ef0] notify_die at ffffffff810a11ce
#4 [ffff880028207f20] do_nmi at ffffffff8152c0fb
#5 [ffff880028207f50] nmi at ffffffff8152b9c0
[exception RIP: _spin_lock+30]
RIP: ffffffff8152b22e RSP: ffff88001d209b88 RFLAGS: 00000206
RAX: 0000000000000004 RBX: ffff88005823dd90 RCX: ffff88005823dd78
RDX: 0000000000000000 RSI: ffffffff81fd0820 RDI: ffffffff81fd0820
RBP: ffff88001d209b88 R8: ffff88017b9cfa90 R9: dead000000200200
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88005823dd48
R13: ffff88001d209c68 R14: ffff8803374ba4f8 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
— —
#6 [ffff88001d209b88] _spin_lock at ffffffff8152b22e
#7 [ffff88001d209b90] _atomic_dec_and_lock at ffffffff81283095
#8 [ffff88001d209bc0] iput at ffffffff811a5aa0
#9 [ffff88001d209be0] dentry_iput at ffffffff811a26c0
#10 [ffff88001d209c00] d_kill at ffffffff811a2821
#11 [ffff88001d209c20] __shrink_dcache_sb at ffffffff811a2bb6
#12 [ffff88001d209cc0] shrink_dcache_parent at ffffffff811a2f64
#13 [ffff88001d209d30] proc_flush_task at ffffffff811f9195
#14 [ffff88001d209dd0] release_task at ffffffff81074ec8
#15 [ffff88001d209e10] wait_consider_task at ffffffff81075cc6
#16 [ffff88001d209e80] do_wait at ffffffff810760f6
#17 [ffff88001d209ee0] sys_wait4 at ffffffff810762e3
#18 [ffff88001d209f80] system_call_fastpath at ffffffff8100b072

From the upstream kernel source:

/**
* iput – put an inode
* @inode: inode to put
*
* Puts an inode, dropping its usage count. If the inode use count hits
* zero, the inode is then freed and may also be destroyed.
*
* Consequently, iput() can sleep.
*/
void iput(struct inode *inode)
{
if (inode) {
BUG_ON(inode->i_state & I_CLEAR);

if (atomic_dec_and_lock(&inode->i_count, &inode->i_lock))
iput_final(inode);
}
}
EXPORT_SYMBOL(iput);

And from include/linux/spinlock.h:
/**
* atomic_dec_and_lock – lock on reaching reference count zero
* @atomic: the atomic counter
* @lock: the spinlock in question
*
* Decrements @atomic by 1. If the result is 0, returns true and locks
* @lock. Returns false for all other cases.
*/
extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
#define atomic_dec_and_lock(atomic, lock) \
__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))

#endif /* __LINUX_SPINLOCK_H */

It looks like the process was shrinking the dentry cache and was holding the spinlock while dropping an inode associated with it. Other processes then waited on the same spinlock, which caused the high %system utilization.

When the system again showed high %sys usage, I checked and found a large slab cache.

[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 4505788 kB
SReclaimable: 4313672 kB
SUnreclaim: 192116 kB

Checking the slab usage on the running system using slabtop, I saw that nfs_inode_cache was the top consumer.
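A sketch of the invocation (-s c sorts by cache size; -o prints a single snapshot and exits):

slabtop -s c -o | head -20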

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
[..]
2793624 2519618 90% 0.65K 465604 6 1862416K nfs_inode_cache

I ran ‘sync’ and then ‘echo 2 > /proc/sys/vm/drop_caches’ to drop the dcache, which fixed the high %sys usage in the system.

[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:04:45 AM CPU %user %nice %system %iowait %steal %idle
11:04:46 AM all 1.51 0.00 13.22 0.50 0.00 84.76
11:04:47 AM all 1.25 0.00 12.55 0.13 0.00 86.07
11:04:48 AM all 1.26 0.00 8.83 0.25 0.00 89.66
11:04:49 AM all 1.63 0.00 11.93 0.63 0.00 85.80
^C
[root@xxxxx ~]# sync
[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:23 AM CPU %user %nice %system %iowait %steal %idle
11:05:24 AM all 1.50 0.00 13.03 0.75 0.00 84.71
11:05:25 AM all 1.76 0.00 9.69 0.25 0.00 88.30
11:05:26 AM all 1.51 0.00 9.80 0.25 0.00 88.44
11:05:27 AM all 1.13 0.00 10.03 0.25 0.00 88.60
^C
[root@xxxxx ~]# echo 2 > /proc/sys/vm/drop_caches
[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 67660 kB

[root@prod-smsgw4 ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:58 AM CPU %user %nice %system %iowait %steal %idle
11:05:59 AM all 1.64 0.00 1.38 0.13 0.00 96.86
11:06:00 AM all 2.64 0.00 1.38 0.38 0.00 95.60
11:06:01 AM all 2.02 0.00 1.89 0.25 0.00 95.84
11:06:02 AM all 2.03 0.00 1.39 4.68 0.00 91.90
11:06:03 AM all 8.21 0.00 2.27 2.65 0.00 86.87
11:06:04 AM all 1.63 0.00 1.38 0.13 0.00 96.86
11:06:05 AM all 2.64 0.00 1.51 0.25 0.00 95.60

From the kernel documentation:

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches

The Java application was traversing the NFS mount and accessing a large number of files, resulting in a large number of nfs_inode_cache entries and hence a large dcache.

Tuning vm.vfs_cache_pressure would be a persistent solution for this; a sketch follows the documentation excerpt below.

From the kernel documentation:

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a “fair” rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.
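As a minimal sketch of applying and persisting a higher value (the value 200 is an arbitrary assumption; tune it for your workload):

# prefer reclaiming dentry/inode caches over pagecache
sysctl -w vm.vfs_cache_pressure=200
# persist the setting across reboots
echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf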

Categories: DBA Blogs

NZOUG14 Beckons

Pythian Group - Fri, 2014-10-17 07:50

New Zealand is famous for Kiwis, pristine landscape, and the New Zealand Oracle User Group (NZOUG) conference.  The location of choice is New Zealand when it comes to making Lord of the Rings and making Oracle Lord of the Databases.

NZOUG 2014 will be held 19–21 November in the Owen G. Glenn Building at the University of Auckland. The main conference will be held on the 20th and 21st, preceded by a day of workshops on the 19th. It’s one of the premier Oracle conferences in the Southern Hemisphere.

Where there is Oracle, there is Pythian. Pythian will be present in full force at NZOUG 2014.

The following are Pythian sessions at NZOUG14:

12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management
Fahd Mirza Chughtai

Everyone Talks About DR – But Why So Few Implement It
Francisco Munoz Alvarez

DBA 101: Calling All New Database Administrators
Gustavo Rene Antunez

My First 100 Days with an Exadata
Gustavo Rene Antunez

Do You Really Know the Index Structures?
Deiby Gómez

Oracle Exadata: Storage Indexes vs Conventional Indexes
Deiby Gómez

Oracle 12c Test Drive
Francisco Munoz Alvarez

Why Use OVM for Oracle Database
Francisco Munoz Alvarez

Please check the full agenda of NZOUG14 here.

Categories: DBA Blogs

Log Buffer #393, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-17 07:47

Bloggers connect databases with their readers through their blogs, acting as a bridge between the two. Log Buffer extends this nexus through the Log Buffer Edition.

Oracle:

MS Sharepoint and Oracle APEX integration.

Just a couple of screenshots of sqlplus+rlwrap+cygwin+console.

Say “Big Data” One More Time (I dare you!)

Update OEM Harvester after 12.1.0.4 Upgrade

Insight in the Roadmap for Oracle Cloud Platform Services.

SQL Server:

Troubleshooting SQL P2P replication that doesn’t replicate DDL schema changes.

Set-based Constraint Violation Reporting in SQL Server.

Where do you start fixing a SQL Server crash when there isn’t a single clue?

A permission gives a principal access to an object to perform certain actions on or with the object.

When you can’t get to your data because another application has it locked, a thorough knowledge of SQL Server concurrency will give you the confidence to decide what to do.

MySQL:

MySQL 5.7.5- More variables in replication performance_schema tables.

Multi-source replication for MySQL has been released as a part of 5.7.5-labs-preview downloadable from labs.mysql.com.

How to install multiple MySQL instances on a single host using MyEnv?

Percona Toolkit for MySQL with MySQL-SSL Connections.

InnoDB: Supporting Page Sizes of 32k and 64k.

Categories: DBA Blogs


Deploying a Private Cloud at Home — Part 4

Pythian Group - Thu, 2014-10-16 09:11

Today’s blog post is part four of seven in a series dedicated to Deploying a Private Cloud at Home, where I will be demonstrating how to configure the Imaging and Compute services on the controller node. See my previous blog post, where we began configuring the Keystone Identity Service.

  1. Install the Imaging service
    yum install -y openstack-glance python-glanceclient
  2. Configure Glance (Imaging Service) to use the MySQL database
    openstack-config --set /etc/glance/glance-api.conf database connection \
    mysql://glance:Your_Password@controller/glance
    openstack-config --set /etc/glance/glance-registry.conf database connection \
    mysql://glance:Your_Password@controller/glance
  3. Create the Glance database user by running the queries below at your MySQL prompt as root
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Your_Password';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Your_Password';
  4. Create the database tables for the Image Service
    su -s /bin/sh -c "glance-manage db_sync" glance
  5. Create a glance user to communicate with the OpenStack services and the Identity Service
    keystone user-create --name=glance --pass=Your_Password --email=Your_Email
    keystone user-role-add --user=glance --tenant=service --role=admin
  6. Configuration of Glance config files
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password Your_Password
    openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
  7. Register the Image Service with the Identity service
    keystone service-create --name=glance --type=image --description="OpenStack Image Service"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ image / {print $2}') \
      --publicurl=http://controller:9292 \
      --internalurl=http://controller:9292 \
      --adminurl=http://controller:9292
  8. Start the Glance-api and Glance-registry services and enable them to start at startup
    service openstack-glance-api start
    service openstack-glance-registry start
    chkconfig openstack-glance-api on
    chkconfig openstack-glance-registry on
  9. Download the CirrOS cloud image, which is created for testing purposes
    wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img \
    -O /root/cirros-0.3.2-x86_64-disk.img
  10. Upload the image to Glance using admin account
    source /root/admin-openrc.sh
    glance image-create --name "cirros-0.3.2-x86_64" \
    --disk-format qcow2 \
    --container-format bare \
    --is-public True \
    --progress < /root/cirros-0.3.2-x86_64-disk.img
  11. Install the Compute controller services on the controller node
    yum install -y openstack-nova-api openstack-nova-cert \
    openstack-nova-conductor openstack-nova-console \
    openstack-nova-novncproxy openstack-nova-scheduler \
    python-novaclient
  12. Configure compute service database
    openstack-config --set /etc/nova/nova.conf database connection mysql://nova:Your_Password@controller/nova
  13. Configure compute service configuration file
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip Controller_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen Controller_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address Controller_IP
  14. Create the nova database user by running the queries below at your MySQL prompt as root
    CREATE DATABASE nova;
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Your_Password';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Your_Password';
  15. Create Compute service tables
    su -s /bin/sh -c "nova-manage db sync" nova
  16. Create a nova user that Compute uses to authenticate with the Identity Service
    keystone user-create --name=nova --pass=Your_Password --email=Your_Email
    keystone user-role-add --user=nova --tenant=service --role=admin
  17. Configure Compute to use these credentials with the Identity Service running on the controller
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password Your_Password
  18. Register Compute with the Identity Service
    keystone service-create --name=nova --type=compute --description="OpenStack Compute"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
      --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
      --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
      --adminurl=http://controller:8774/v2/%\(tenant_id\)s
  19. Now start Compute services and configure them to start when the system boots
    service openstack-nova-api start
    service openstack-nova-cert start
    service openstack-nova-consoleauth start
    service openstack-nova-scheduler start
    service openstack-nova-conductor start
    service openstack-nova-novncproxy start
    chkconfig openstack-nova-api on
    chkconfig openstack-nova-cert on
    chkconfig openstack-nova-consoleauth on
    chkconfig openstack-nova-scheduler on
    chkconfig openstack-nova-conductor on
    chkconfig openstack-nova-novncproxy on
  20. You can verify your configuration and list available images
    source /root/admin-openrc.sh
    nova image-list

 

This concludes the initial configuration of the controller node before the configuration of the compute node. Stay tuned for part five, where I will demonstrate how to configure the compute node.

Categories: DBA Blogs

Tweaked bind variable script

Bobby Durrett's DBA Blog - Wed, 2014-10-15 15:17

I modified the bind variable extraction script that I normally use to make it more helpful to me.

Here was my earlier post with the old script: blog post

Here is my updated script:

set termout on 
set echo on
set linesize 32000
set pagesize 1000
set trimspool on

column NAME format a3
column VALUE_STRING format a17

spool bind2.log

select * from 
(select distinct
to_char(sb.LAST_CAPTURED,'YYYY-MM-DD HH24:MI:SS') 
  DATE_TIME,
sb.NAME,
sb.VALUE_STRING 
from 
DBA_HIST_SQLBIND sb
where 
sb.sql_id='gxk0cj3qxug85' and
sb.WAS_CAPTURED='YES')
order by 
DATE_TIME,
NAME;

spool off

Replace gxk0cj3qxug85 with the sql_id of your own query.
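If you don’t know the sql_id of your query yet, here is a hedged sketch for finding it by text (this assumes the statement was captured in AWR; edit the LIKE pattern to match your query):

select 
sql_id,
dbms_lob.substr(sql_text,80,1) sql_text_start
from 
DBA_HIST_SQLTEXT
where 
sql_text like '%your query text here%';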

The output looks like this (I’ve scrambled the values to obscure production data):

DATE_TIME           NAM VALUE_STRING
------------------- --- -----------------
2014-08-29 11:22:13 :B1 ABC
2014-08-29 11:22:13 :B2 DDDDDD
2014-08-29 11:22:13 :B3 2323
2014-08-29 11:22:13 :B4 555
2014-08-29 11:22:13 :B5 13412341
2014-08-29 11:22:13 :B6 44444
2014-08-29 11:26:47 :B1 gtgadsf
2014-08-29 11:26:47 :B2 adfaf
2014-08-29 11:26:47 :B3 4444
2014-08-29 11:26:47 :B4 5435665
2014-08-29 11:26:47 :B5 4444
2014-08-29 11:26:47 :B6 787

This is better than the original script because it keeps related bind variable values together.

– Bobby


Categories: DBA Blogs

Please look at latest Oct 2014 Oracle patching

Grumpy old DBA - Wed, 2014-10-15 11:23
This one looks like the real thing ... getting advice to "not skip" the patching process for a whole bunch of things included here.

I'm just saying ...
Categories: DBA Blogs

12c: Access Objects Of A Common User Non-existent In Root

Oracle in Action - Tue, 2014-10-14 23:56

In a multitenant environment, a common user is a database user whose identity and password are known in the root and in every existing and future pluggable database (PDB). Common users can connect to the root and perform administrative tasks specific to the root or PDBs. There are two types of common users :

  • All Oracle-supplied administrative user accounts, such as SYS and SYSTEM
  • User-created common users – their names must start with C## or c##.

When a PDB containing a user-created common user is plugged into another CDB and the target CDB does not have a common user with the same name, the common user in the newly plugged-in PDB becomes a locked account.
To access such a common user’s objects, you can do one of the following:

  • Leave the user account locked and use the objects of its schema.
  • Create a common user with the same name as the locked account.

Let’s demonstrate …

Current scenario:

Source CDB : CDB1
- one PDB (PDB1)
- Two common users C##NXISTS and C##EXISTS

Destination CDB : CDB2
- No PDB
- One common user C##EXISTS

Overview:
- As user C##NXISTS, create and populate a table in PDB1@CDB1
- Unplug PDB1 from CDB1 and plug into CDB2 as PDB1_COPY
- Open PDB1_COPY and Verify that

  •  user C##NXISTS has not been created in root
  • users C##NXISTS and C##EXISTS have both been created in PDB1_COPY. The account of C##EXISTS is open whereas the account of C##NXISTS is locked.

- Unlock user C##NXISTS account in PDB1_COPY.
- Try to connect to pdb1_copy as C##NXISTS  – fails with internal error.
- Create a local user  LUSER in PDB1_COPY with privileges on C##NXISTS’  table and verify that LUSER can access C##NXISTS’ table.
- Create user C##NXISTS in root with PDB1_COPY closed. The account of C##NXISTS is automatically opened on opening PDB1_COPY.
- Try to connect as C##NXISTS to pdb1_copy – succeeds

Implementation:

– Setup –

CDB1>sho con_name

CON_NAME
------------------------------
CDB$ROOT

CDB1>sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1                           READ WRITE NO

CDB1>select username, common from cdb_users where username like 'C##%';

no rows selected

- Create 2 common users in CDB1
    - C##NXISTS
    - C##EXISTS

CDB1>create user C##EXISTS identified by oracle container=all;
     create user C##NXISTS identified by oracle container=all;

     col username for a30
     col common for a10
     select username, common from cdb_users where   username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##NXISTS                      YES
C##EXISTS                      YES
C##NXISTS                      YES
C##EXISTS                      YES

- Create user C##EXISTS  in CDB2

CDB2>sho parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ -----------
db_name                        string      cdb2

CDB2>sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO

CDB2>create user C##EXISTS identified by oracle container=all;
     col username for a30
     col common for a10

     select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- As user C##NXISTS, create and populate a table in PDB1@CDB1

CDB1>alter session set container=pdb1;
     alter user C##NXISTS quota unlimited on users;
     create table C##NXISTS.test(x number);
     insert into C##NXISTS.test values (1);
     commit;

- Unplug PDB1 from CDB1

CDB1>alter session set container=cdb$root;
     alter pluggable database pdb1 close immediate;
     alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

CDB1>select name from v$datafile where con_id = 3;

NAME
-----------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/example01.dbf

- Plug in PDB1 into CDB2 as PDB1_COPY

CDB2>create pluggable database pdb1_copy using '/home/oracle/pdb1.xml'      file_name_convert =
('/u01/app/oracle/oradata/cdb1/pdb1','/u01/app/oracle/oradata/cdb2/pdb1_copy');

sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1_COPY                      MOUNTED

– Verify that C##NXISTS user is not visible as PDB1_COPY is closed

CDB2>col username for a30
col common for a10
select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- Open PDB1_COPY and Verify that
  . users C##NXISTS and C##EXISTS both have been created in PDB.
  . Account of C##EXISTS is open whereas account of C##NXISTS is  locked.

CDB2>alter pluggable database pdb1_copy open;
col account_status for a20
select con_id, username, common, account_status from cdb_users  where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------      ----------      --------------------------
1 C##EXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        LOCKED

– Unlock user C##NXISTS account on PDB1_COPY

CDB2>alter session set container = pdb1_copy;
     alter user C##NXISTS account unlock;
     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------     -------------  ---------------------------
 3 C##EXISTS                      YES        OPEN
 3 C##NXISTS                      YES        OPEN

– Try to connect as C##NXISTS to pdb1_copy – fails with internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-00600: internal error code, arguments: [kziaVrfyAcctStatinRootCbk: !user],
[C##NXISTS], [], [], [], [], [], [], [], [], [], []

- Since user C##NXISTS cannot connect to pdb1_copy, we can lock the account again

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     alter user C##NXISTS account lock;

     col account_status for a20
     select username, common, account_status from dba_users     where username like 'C##%' order by username;

USERNAME                       COMMON     ACCOUNT_STATUS
------------------------------ ---------- --------------------
C##EXISTS                      YES        OPEN
C##NXISTS                      YES        LOCKED

– Now if C##NXISTS tries to log in to PDB1_COPY, ORA-28000 is returned    instead of internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-28000: the account is locked

How to access C##NXISTS objects?

SOLUTION – I

- Create a local user in PDB1_COPY with appropriate object privileges on C##NXISTS’ table

CDB2>conn sys/oracle@localhost:1522/pdb1_copy  as sysdba

     create user luser identified by oracle;
     grant select on c##nxists.test to luser;
     grant create session to luser;

– Check that the local user can access common user C##NXISTS’ tables

CDB2>conn luser/oracle@localhost:1522/pdb1_copy;
     select * from c##nxists.test;
X
----------
1

SOLUTION – II :  Create the common user C##NXISTS in CDB2

- Check that C##NXISTS has not been created in CDB$root

CDB2>conn sys/oracle@cdb2 as sysdba
     col account_status for a20
     select con_id, username, common, account_status from cdb_users    where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------   -------------     -------------------------
1 C##EXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        LOCKED

- Try to create user C##NXISTS with PDB1_COPY open – fails

CDB2>create user c##NXISTS identified by oracle;
create user c##NXISTS identified by oracle
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1_COPY
ORA-01920: user name 'C##NXISTS' conflicts with another user or role  name

- Close PDB1_COPY, create user C##NXISTS in root, and verify that its account is automatically unlocked on opening PDB1_COPY

CDB2>alter pluggable database pdb1_copy close;
     create user c##NXISTS identified by oracle;
     alter pluggable database pdb1_copy open;

     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
----------   ------------------------------ ----------      --------------------
1 C##EXISTS                      YES        OPEN
1 C##NXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        OPEN

– Connect to PDB1_COPY as C##NXISTS after granting appropriate privilege – Succeeds

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-01045: user C##NXISTS lacks CREATE SESSION privilege; logon denied
Warning: You are no longer connected to ORACLE.

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     grant create session to c##nxists;
     conn c##nxists/oracle@localhost:1522/pdb1_copy

CDB2>sho con_name

CON_NAME
------------------------------
PDB1_COPY

CDB2>sho user

USER is "C##NXISTS"

CDB2>select * from test;

X
----------
1

References:
http://docs.oracle.com/database/121/DBSEG/users.htm#DBSEG573

The post 12c: Access Objects Of A Common User Non-existent In Root appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 3

Pythian Group - Tue, 2014-10-14 14:59

Today’s blog post is part three of seven in a series dedicated to Deploying a Private Cloud at Home, where I will demonstrate how to configure the OpenStack Identity service on the controller node. We already configured the required repo in part two of the series, so let’s get started on configuring the Keystone Identity Service.

  1. Install keystone on the controller node.
    yum install -y openstack-keystone python-keystoneclient

    OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports several message brokers, including RabbitMQ, Qpid, and ZeroMQ. I am using Qpid as it is available on most distros.

  2. Install Qpid Messagebroker server.
    yum install -y qpid-cpp-server

    Now modify the Qpid configuration file to disable authentication by changing the line below in /etc/qpidd.conf

    auth=no

    Now start the qpid service and enable it to start on server startup

    chkconfig qpidd on
    service qpidd start
  3. Now configure keystone to use the MySQL database
    openstack-config --set /etc/keystone/keystone.conf \
       database connection mysql://keystone:YOUR_PASSWORD@controller/keystone
  4. Next, create the keystone database user by running the queries below at your MySQL prompt as root.
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'YOUR_PASSWORD';
  5. Now create database tables
    su -s /bin/sh -c "keystone-manage db_sync" keystone

    Currently we don’t have any user accounts that can communicate with the OpenStack services and the Identity service, so we will set up an authorization token to use as a shared secret between the Identity Service and the other OpenStack services, and store it in the configuration file.

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN
    openstack-config --set /etc/keystone/keystone.conf DEFAULT \
       admin_token $ADMIN_TOKEN
  6. Keystone uses PKI tokens by default. Now create the signing keys and certificates and restrict access to the generated data
    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl
  7. Start and enable the keystone identity service to begin at startup
    service openstack-keystone start
    chkconfig openstack-keystone on

    The Keystone Identity service stores expired tokens in the database as well. We will create the crontab entry below to purge the expired tokens

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
    echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
  8. Now we will create the admin user for keystone and define roles for the admin user
    export OS_SERVICE_TOKEN=$ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
    keystone user-create --name=admin --pass=Your_Password --email=Your_Email
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    keystone user-create --name=pythian --pass=Your_Password --email=Your_Email
    keystone tenant-create --name=pythian --description="Pythian Tenant"
    keystone user-role-add --user=pythian --role=_member_ --tenant=pythian
    keystone tenant-create --name=service --description="Service Tenant"
  9. Now we create a service entry for the identity service
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://controller:5000/v2.0 \
    --internalurl=http://controller:5000/v2.0 \
    --adminurl=http://controller:35357/v2.0
  10. Verify Identity service installation
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
  11. Request an authentication token by using the admin user and the password you chose for that user
    keystone --os-username=admin --os-password=Your_Password \
      --os-auth-url=http://controller:35357/v2.0 token-get
    keystone --os-username=admin --os-password=Your_Password \
      --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
      token-get
  12. We will save the required parameters in admin-openrc.sh as below
    export OS_USERNAME=admin
    export OS_PASSWORD=Your_Password
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
  13. Next, check that everything is working fine and Keystone interacts with the OpenStack services. We will source the admin-openrc.sh file to load the keystone parameters
    source /root/admin-openrc.sh
  14. List Keystone tokens using:
    keystone token-get
  15. List Keystone users using
    keystone user-list
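    As one more quick sanity check (a sketch, assuming the API is reachable at the default port on the controller host), the service should answer with version metadata over plain HTTP:

    curl http://controller:5000/v2.0/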

If all the above commands give you output, your Keystone Identity Service is all set up, and you can proceed to the next steps. In part four, I will discuss how to configure and set up the Image Service to store images.

Categories: DBA Blogs

Oracle E-Business Suite Updates From OpenWorld 2014

Pythian Group - Tue, 2014-10-14 08:29

Oracle OpenWorld has always been my most exciting conference to attend. I always see high energy levels everywhere, and it kind of revs me up to tackle new upcoming technologies. This year I concentrated on attending mostly Oracle E-Business Suite release 12.2 and Oracle 12c Database-related sessions.

On the Oracle E-Business Suite side, I started off with the Oracle EBS Customer Advisory Board Meeting, with great presentations on new features like the Oracle EBS 12.2.4 touch-friendly iPad interface, which can be enabled by setting the “Self Service Personal Home Page mode” profile value to “Framework Simplified”. We also discussed some pros and cons of the new downtime mode feature of the adop online patching utility, which allows release update packs (like the 12.2.3 and 12.2.4 patches) to be applied without starting up a new online patching session. I will cover more details about that in a separate blog post. In the meantime, take a look at the simplified home page of my 12.2.4 sandbox instance.

Oracle EBS 12.2.4 Simplified Interface

Steven Chan’s presentation on the EBS Certification Roadmap announced upcoming support for the Chrome browser on Android tablets, IE11, Oracle Unified Directory, and more. Oracle did not extend any support deadlines for Oracle EBS 11i or R12 this time, so to all EBS customers on 11i: it’s time to move to R12.2. I also attended a good session on testing best practices for Oracle E-Business Suite, which had a good slide on the extra testing required during an online patching cycle. I am planning a separate blog post with more details on that, as it is an important piece of information that one might ignore. Oracle also announced a new product called Flow Builder, part of the Oracle Application Testing Suite, which helps users test functional flows in Oracle EBS.

On the 12c database side, I attended great sessions by Christian Antognini on Adaptive Query Optimization and Markus Michalewicz’s sessions on 12c RAC operational best practices and RAC Cache Fusion internals. Markus’s Cache Fusion presentation had some great recommendations on using _gc_policy_minimum instead of turning off DRM completely using _gc_policy_time=0. There is also now a way to control DRM of an object using the package DBMS_CACHEUTIL.

I also attended sessions on some new, upcoming technologies that are picking up in the Oracle space, like Oracle NoSQL, Oracle Big Data SQL, and the Oracle Data Integrator Hadoop connectors. These products seem to have a great future ahead and good chances of becoming mainstream on the data warehousing side of businesses.

Categories: DBA Blogs

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

The Oracle Instructor - Tue, 2014-10-14 08:20

When using the Data Guard Broker, you don’t need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration; the broker does that for you. Forget about what you may have heard about VALID_FOR – you don’t need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let’s look at a typical example involving the redo transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site, and spfiles are in use so that the broker (the DMON background process, to be precise) can access them:

Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception from this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo transport mode to sync for the standby database. It helps to think of the log transport mode as an attribute (respectively a property) of a certain database in your configuration, because that is also how the broker sees it.
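By the way, you can inspect a single property without entering an interactive session; a small sketch (same sys credentials and names as used below):

dgmgrl sys/oracle@prima "show database 'physt' 'LogXptMode'"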

 

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Configuration for physt

The mirrored broker configuration files on all involved database servers now contain that logxptmode property. No new entry in the spfile of physt is required. The present configuration now allows raising the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.

Configuration for prima

Now let’s do that switchover and see how the broker automatically ensures that the new primary physt will ship redo to prima:

 

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was issue the switchover command; without me specifying any LOG_ARCHIVE* parameter, the broker did it all, as this picture shows:

Configuration after switchover

In particular, the spfile of the physt database now got the new entry:

 

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Not only is it unnecessary to specify any of the LOG_ARCHIVE* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain with warning messages. As an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly that triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting

In order to resolve that inconsistency, I will do it also with a broker command – which is what I should have done instead of the alter system command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most but not all are LOG_ARCHIVE* parameters:

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT


Tagged: Data Guard, High Availability
Categories: DBA Blogs

Digital Learning – LVC: It’s the attitude, stupid!

The Oracle Instructor - Mon, 2014-10-13 06:33

The single most important factor for successful digital learning is the attitude of both the instructor and the attendees towards the course format. Delivering countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites of course: a reliable and fast network connection and the usage of a good headset are mandatory, else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, a working lab environment for hands-on practices and, last but not least, knowledgeable instructors. For that part, notice that we have the very same courseware, lab environments and instructors for LVCs as for our classroom courses at Oracle University education centers. The major difference is in your head :-)

Delivering my first couple of LVCs, I felt quite uncomfortable with the new format. Accordingly, my performance was not as good as usual. Meanwhile, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same performance as my classroom courses. Actually, they are even better to some degree: I always struggle to produce clean sketches with readable handwriting on the whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

Data Guard Real-Time Apply

Attendees get all my sketches per email if they like afterwards.

In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

So this is also true on the side of the attendee: The attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses because you can attend from home without the time and expense it takes to travel :-)

Some common objections against LVC I have heard from customers and my usual responses:

An LVC doesn’t deliver the same amount of interaction like a classroom course!

That is not necessarily so: You are in a small group (usually fewer than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just as in a classroom. Additionally, you have a chatbox available. This is sometimes extremely helpful, especially with non-native speakers in the class :-) You can easily exchange email addresses using the chatbox as well and stay in touch even after the LVC.

I have no appropriate working place to attend an LVC!

Then you have no appropriate working place at all for anything that requires a certain amount of concentration. Talk to your manager about it; maybe something like a quiet room is available during the LVC.

I cannot keep up my attention when staring at the computer screen the whole day!

Of course not, that is why we have breaks and practices in between the lessons.

Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?


Tagged: Digital Learning, LVC
Categories: DBA Blogs

Partner Webcast Special Edition – Oracle Mobile Application Framework: Developer Challenge

Win up to $6,000 by developing a mobile application with Oracle Mobile Application Framework! Mobile technology has changed the way that we live and work. As mobile interfaces take the lion’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs

grumpy old dba goes to the mountains ( have to ski somewhere )?

Grumpy old DBA - Sat, 2014-10-11 14:47
So, in a slight reversal of my usual routine of submitting so many presentation abstracts and getting rejected so many times, Rocky Mountain Training Days 2015 will be featuring me! Wow, pumped! This should be a ton of fun!

OK, to be totally honest, Maria Colgan is keynoting, not me (shocking news, still trying to get over it).

Rocky Mountain training days 2015

Looks like a top batch of speakers and topics ... I only have to present at the same time as people like Alex Gorbachev / Bill Inmon / Heli Helskyaho ... so maybe my room will be a little sparse, but I will be attempting to bribe people into attending my session, so we will see how that goes.

Thanks RMOUG!

PS on another topic more news soon on our GLOC 2015 conference coming up in May 2015!

Categories: DBA Blogs

OCP 12C – SQL Tuning

DBA Scripts and Articles - Fri, 2014-10-10 12:23

What’s new? Oracle 12c introduces a major update called Adaptive Query Optimization, which is based on adaptive execution plans and adaptive statistics. These two functionalities are used to improve execution plans with dynamic statistics gathered during the first part of the SQL execution. This allows the optimizer to create more efficient plans than those using [...]
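
As a quick illustration of the feature (my own sketch, not taken from the linked article), 12c can flag the adaptive parts of a plan and tell you whether a cursor's plan was resolved adaptively:

SQL> select * from table(dbms_xplan.display_cursor(format => '+ADAPTIVE'));
SQL> select sql_id, is_resolved_adaptive_plan from v$sql where sql_id = '&sql_id';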

The post OCP 12C – SQL Tuning appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 2

Pythian Group - Fri, 2014-10-10 08:34

Today’s blog post is part two of seven in a series dedicated to Deploying a Private Cloud at Home, where I will demonstrate the basic configuration needed to get started with OpenStack. In my first blog post, I explained why I decided to use OpenStack.

I am using a two-node setup in my environment, but you can still follow these steps and configure everything on a single node. The configuration below reflects my setup; kindly modify it as per your subnet and settings.

  • My home network has a subnet of 192.168.1.0/24
  • My home PC, which I am turning into the controller node, has an IP of 192.168.1.140
  • My KVM hypervisor, which I am turning into the compute node, has an IP of 192.168.1.142
  1. It is advisable to have DNS set up in your intranet, but in case you don’t have it, you need to modify the /etc/hosts file on both the controller and compute nodes so that the OpenStack services can reach each other by name, like below (a quick resolution check follows the entries)
    #Controller node
    192.168.1.140 controller
    #Compute node
    192.168.1.142 compute
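    To confirm the entries resolve as intended (my addition, not part of the original write-up):
     getent hosts controller
     getent hosts compute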
  2. OpenStack services require a database to store information. You can use any database you are familiar with. I am using MySQL/MariaDB, as I am familiar with it. On the controller node, we will install the MySQL client and server packages, and the Python library.
     yum install -y mysql mysql-server MySQL-python
  3. Enable InnoDB, the UTF-8 character set, and UTF-8 collation by default. To do that we need to modify /etc/my.cnf and set the following keys under the [mysqld] section.
    default-storage-engine = innodb 
    innodb_file_per_table 
    collation-server = utf8_general_ci 
    init-connect = 'SET NAMES utf8' 
    character-set-server = utf8
  4. Start and enable the MySQL services
    service mysqld start
    chkconfig mysqld on
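    Once mysqld is up, a quick sanity check of the settings from step 3 can look like this (my addition; on a fresh install the root account has no password yet, so no credentials are needed):
     mysql -e "SHOW VARIABLES LIKE 'character_set_server';"
     mysql -e "SHOW VARIABLES LIKE 'collation_server';"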
  5. Finally, set the root password for the MySQL database; one example follows below. If you need further details about configuring the MySQL root password, there are many resources available online.
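    One common way is the interactive helper script that ships with the server packages; it prompts for a new root password and removes insecure defaults:
     mysql_secure_installation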
  6. On the compute node we need to install the MySQL Python library
    yum install -y MySQL-python
  7. Set up the RDO repository on both the controller and compute nodes
    yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
  8. I am using CentOS 6.2, so I need the EPEL repo as well. This step is not required if you are using a distro other than RHEL, CentOS, Scientific Linux, etc.
    yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
  9. Install OpenStack utilities on both nodes and get started.
    yum install openstack-utils
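    Once installed, the openstack-status command from this package gives a quick overview of which OpenStack services and supporting components (such as mysqld) are running; it is a handy way to verify each step of this series:
     openstack-status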

Stay tuned for the remainder of my series, Deploying a Private Cloud at Home. In part three, we will continue configuring OpenStack services.

Categories: DBA Blogs

SQL Saturday Bulgaria 2014

Pythian Group - Fri, 2014-10-10 08:22

This Saturday, October 11, I will be speaking at SQL Saturday Bulgaria 2014 in Sofia. It’s my first time in the country, and I’m really excited to be part of another SQL Saturday :)

I will be speaking about Buffer Pool Extension, a new feature in SQL Server 2014. If you want to learn a little more about the new SQL Server version, don’t hesitate to attend the event. Looking forward to seeing you there!
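
For the curious, enabling the feature is a single statement; here is a minimal sketch (the file path and size are placeholder assumptions, not recommendations from the talk):

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\BufferPoolExtension.BPE', SIZE = 16 GB);

The extension file should live on fast solid-state storage, since it acts as an overflow area for clean buffer pool pages.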

Categories: DBA Blogs

Log Buffer #392, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-10 08:19

It seems it’s all about cloud these days. Even hardware is being marketed with the cloud in perspective. Databases like Oracle, SQL Server and MySQL are ahead in the cloud game, and this Log Buffer Edition covers it all.


Oracle:

Oracle Database 12c was launched over a year ago, delivering the next generation of the #1 database, designed to meet modern business needs and providing a new multitenant architecture on top of a fast, scalable, reliable, and secure database platform.

Oracle OpenWorld 2014 Session Presentations Now Available.

Today, Oracle is using big data technology and concepts to significantly improve the effectiveness of its support operations, starting with its hardware support group.

Generating Sales Cloud Proxies using Axis? Getting errors?

How many page views can Apex sustain when running on Oracle XE?

SQL Server:

Send emails using SSIS and SQL Server instead of application-level code.

The public perception is that, when something is deleted, it no longer exists. Often that’s not really the case; the data you serve up to the cloud can be stored out there indefinitely, no matter how hard you try to delete it.

Every day, out in the various online forums devoted to SQL Server, and on Twitter, the same types of questions come up repeatedly: Why is this query running slowly? Why is SQL Server ignoring my index? Why does this query run quickly sometimes and slowly at others?

You need to set up backup and restore strategies to recover data or minimize the risk of data loss in case a failure happens.

Improving the Quality of SQL Server Database Connections in the Cloud

MySQL:

Low-concurrency performance for updates and the Heap engine: MySQL 5.7 vs previous releases.

Database Automation – Private DBaaS for MySQL, MariaDB and MongoDB with ClusterControl.

Removing Scalability Bottlenecks in the Metadata Locking and THR_LOCK Subsystems in MySQL 5.7.

The EXPLAIN command is one of MySQL’s most useful tools for understanding query performance. When you EXPLAIN a query, MySQL will return the plan created by the query optimizer.
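
For example (my illustration, not from the linked post):

mysql> EXPLAIN SELECT * FROM employees WHERE emp_no = 10001;

The output shows the chosen access type, the index used (if any) and the estimated number of rows to be examined.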

Shinguz: Migration between MySQL/Percona Server and MariaDB.

Categories: DBA Blogs

Difference between 2014 and 2015 cadillac srx

Ameed Taylor - Fri, 2014-10-10 01:21
If you can't recall any small Cadillac cars of the past, consider yourself lucky, as neither the Opel Omega-based Catera nor the Chevy Cavalier-based Cimarron inspires particularly fond memories. Fortunately, all that matters now is that the Cadillac ATS stands as an excellent entry in a class full of overachieving sport sedans.

It's no secret that Cadillac has aimed the rear-wheel-drive ATS squarely at the well-balanced BMW 3 Series, which has defined the segment for decades. The ATS's exterior dimensions largely mirror those of the 3 Series, and the car offers pleasing build quality, feisty performance and engaging handling together with a supple ride, much like the benchmark Bimmer. Cadillac's newest model also offers an intelligent electronic interface with which to operate all the onboard comfort gadgets, an important feature in this segment of luxury cars.

The ATS stacks up well against its rivals. On the highway, it supplies good steering feel and an agile, smartly tuned ride. Helping its sharp handling is the fact that this Caddy is the lightest sedan in its class (by 70-150 pounds, depending on trim). Further adding to the ATS's athleticism is its ideal 50/50 weight distribution between the front and rear wheels.

With a trio of engines on offer, the ATS's performance ranges from lukewarm to intriguing. The base 2.5-liter four serves as the price and fuel-economy leader, even though its 202-horsepower output lags behind the base engines found in the competition. Meanwhile, the turbocharged 2.0-liter inline-4 packs a strong midrange punch and is the only engine in the ATS range that can be had with a manual gearbox. With 321 hp, the lively V6 offers a sweet soundtrack and is intelligently matched to a keenly responsive automatic transmission.

There are a few minor gripes with the ATS. Enthusiasts may wish for a manual gearbox with the top engine, while the rear seats and trunk are considerably less spacious than what some rivals provide. Indeed, this segment isn't exactly short of talent, either. The 2013 BMW 3 Series still takes top honors by virtue of its superior base powertrain and even more appealing driving dynamics, although it is typically more expensive. We're also slightly partial to the similarly well-rounded Audi A4, the refined Mercedes-Benz C-Class and the value-packed, if not as polished, Infiniti G sedan. Overall, though, the 2013 Cadillac ATS is a very strong contender in this very competitive segment of compact sport sedans.

Trim levels and features
The ATS is a five-passenger, luxury-oriented sport sedan offered in four trim levels: base, Luxury, Performance and Premium.

Standard features on the base trim include 17-inch alloy wheels, heated mirrors, automatic headlights, cruise control, dual-zone automatic climate control, six-way power front seats with power lumbar, premium vinyl (leatherette) upholstery, a tilt-and-telescoping steering wheel, OnStar, Bluetooth phone connectivity and a seven-speaker Bose sound system with satellite radio, an iPod/USB interface and an auxiliary audio jack.

The Luxury trim adds run-flat tires, keyless entry/ignition, remote engine start, eight-way power front seats, front and rear parking assist, a rearview camera, an auto-dimming rearview mirror, leather seating, driver memory functions, a 60/40-split folding rear seat (with pass-through), HD radio, Bluetooth audio streaming and the CUE infotainment interface.

The Performance trim (not available with the 2.5-liter engine) further adds dual exhaust outlets, a Driver Awareness package (forward collision warning, rear cross-traffic alert, lane departure warning, automatic wipers and rear-seat side airbags), an active air grille, xenon headlights, an upgraded 10-speaker Bose surround-sound system (with a CD player), front sport seats (with driver-side bolster adjustment) and a fixed rear seat with pass-through.


Stepping up to the Premium trim (not available with the 2.5-liter engine) adds 18-inch wheels, a navigation system, a color head-up display and the 60/40 split-folding rear seat. An ATS Premium with rear-wheel drive also comes with summer tires, a sport-tuned suspension, adaptive suspension dampers and a limited-slip rear differential.


Many of the features that are standard on the upper trims are available as options on the lower trims. A few other optional packages are offered as well. The Driver Assist package includes the features of the Driver Awareness package and adds adaptive cruise control, blind-spot monitoring, collision preparation with brake assist, and the color head-up display. The Cold Weather package includes heated front seats and a heated steering wheel. The Track Performance package adds an engine oil cooler and upgraded brake pads. Other options include different wheels, a sunroof and a trunk cargo organizer.

Engines and performance
The 2.5 models come with a 2.5-liter four-cylinder engine that produces 202 hp and 190 pound-feet of torque. The 2.0 Turbo models carry a turbocharged 2.0-liter four-cylinder rated at 272 hp and 260 lb-ft of torque. The 3.6 models get a 3.6-liter V6 that cranks out 321 hp and 274 lb-ft of torque.

All ATS engines come matched to a six-speed automatic transmission except the 2.0 Turbo, which can also be had with a six-speed manual. Rear-wheel drive is standard across the board, with all-wheel drive optional for the 2.0- and 3.6-liter engines.

In Edmunds testing, a rear-drive ATS 2.0T with the manual went from zero to 60 mph in 6.3 seconds. A rear-drive ATS 3.6 Premium with the automatic accelerated from zero to 60 mph in 5.7 seconds. Both times are average among similarly powered entry-level sport sedans.

EPA-estimated fuel economy for the ATS 2.5 stands at 22 mpg city/33 mpg highway and 26 mpg combined. The V6 is estimated to reach 19 mpg city/28 mpg highway with rear-wheel drive, and Cadillac claims the 2.0-liter Turbo gets the same with an automatic transmission. With all-wheel drive, the ATS V6 drops to 18 mpg city/26 mpg highway and 21 mpg combined.
Safety
Standard safety features for the ATS include antilock disc brakes, traction control, stability control, active front head restraints, front-seat side and knee airbags and full-length side curtain airbags. Also standard is OnStar, which includes automatic crash notification, on-demand roadside assistance, remote door unlocking, stolen vehicle assistance and turn-by-turn navigation. Optional are the aforementioned Driver Awareness and Driver Assist packages.

In Edmunds brake testing, an ATS 3.6 Premium came to a stop from 60 mph in an impressively short 108 feet. A 2.0T stopped in an average distance of 113 feet.
Interior
Within its cabin, the ATS boasts plenty of high-quality materials, including tasteful wood and metal accents. The available CUE infotainment interface features large icons and operates like an iPhone or iPad, which is to say you use it by tapping, flicking, swiping or spreading your fingers, making it familiar to many users. Moreover, haptic feedback lets you know you've pressed a virtual button by pulsing when you touch it.

Up front, the seats do a nice job of holding you in place during spirited drives, and it is quite easy to find a comfortable driving position. Oddly, the optional sport seats do not offer much more in the way of lateral support for the driver, despite their power-adjustable bolsters.

Rear-seat headroom is good, but knee room is tight for taller passengers. Despite a wide opening, the trunk offers just 10.2 cubic feet of capacity, downright stingy for this segment. Fortunately, some trims feature a 60/40 split-folding rear seat, which helps in this regard.
Driving impressions
The ATS is an impressive all-around performer, thanks to a poised ride, sure-footed cornering and excellent response from the steering and brakes. The 2.5-liter engine is smooth, but it offers tepid acceleration compared to other entry-level powertrains, notably that of the BMW 328i. Opt for one of the other ATS engines, however, and you will have no complaints, as they supply thrust more in line with this Cadillac's athletic personality. Even though enthusiasts may lament the lack of a manual transmission for the V6, the six-speed automatic is hard to fault. Switched to sport mode, it knows just when to hold a gear and delivers smooth, rev-matched downshifts right on time, every time.

Even with its sporting calibration, the ATS takes neglected city streets in stride, absorbing the shock of potholes and broken pavement without upsetting the car or its occupants. As a result, the compact Cadillac makes for a pleasant daily driver that can also provide plenty of entertainment on a Sunday morning drive.


Categories: DBA Blogs