
Feed aggregator

Learning Spark Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia; O'Reilly Media

Surachart Opun - Sat, 2014-10-18 12:45
Apache Spark started as a research project at UC Berkeley in the AMPLab, which focuses on big data analytics. Spark is an open source cluster computing platform designed to be fast and general-purpose for data analytics: it is both fast to run and fast to write. Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, much quicker than with disk-based systems like Hadoop MapReduce. Users can write applications quickly in Java, Scala or Python. In addition, it is easy to run standalone or on EC2 or Mesos, and it can read data from HDFS, HBase, Cassandra, and any Hadoop data source.
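To make the in-memory idea concrete, here is a minimal PySpark sketch (my own illustration, not taken from the book; the input file name and filter terms are made up):

from pyspark import SparkContext

sc = SparkContext("local[*]", "quick-demo")

lines = sc.textFile("logs.txt")                    # hypothetical input file
errors = lines.filter(lambda l: "ERROR" in l)
errors.cache()                                     # keep the filtered RDD in memory

# Repeated queries now hit the cached RDD instead of re-reading from disk.
print(errors.count())
print(errors.filter(lambda l: "timeout" in l).count())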
If you would like a book about Spark, have a look at Learning Spark: Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, and Matei Zaharia. It's a great book for anyone who is interested in Spark development and getting started with it. Readers will learn how to express MapReduce jobs with just a few simple lines of Spark code and more...
  • Quickly dive into Spark capabilities such as collect, count, reduce, and save
  • Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
  • Learn how to run interactive, iterative, and incremental analyses
  • Integrate with Scala to manipulate distributed datasets like local collections
  • Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization
  • Use other languages by means of pipe() to achieve the equivalent of Hadoop streaming
The Early Release has 7 chapters. They cover an overview of Apache Spark, downloading it and the commands you should know, programming with RDDs (plus more advanced topics), and working with key/value pairs, among other things. It is easy to read and the examples are good. For people who want to learn Apache Spark or use Spark for data analytics, it's a book worth keeping on the shelf.

Book: Learning Spark Lightning-Fast Big Data Analytics
Authors: Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Going dotty – Generating a Filename containing a parameter value in SQL*Plus

The Anti-Kyte - Sat, 2014-10-18 12:01

As I have alluded to previously, I was not born in the UK.
Nope, my parents decided to up-sticks and move from London all the way to the
other side of the world, namely Auckland.
Then they had me. Then they came back.
To this day, they refuse to comment on whether these two events were related.

I went back to New Zealand a few years ago.
As I wandered around places that I hadn’t seen since I was five, it was strange how memories that I had forgotten came flooding back.
That last sentence doesn’t make much sense. It’s probably more accurate to say that memories I hadn’t thought about for years came flooding back.

I recently remembered something else I once knew, and then forgot – namely how to generate a SQL*Plus file name which includes a parameter value.

The scenario

I’ve got a script that lists all of the employees in a given department :

accept deptno prompt 'Enter Department ID : '
spool department.lis

select first_name, last_name
from hr.employees
where department_id = &deptno
order by employee_id
/

spool off

Now, rather than it just creating a file called department.lis, I want to create a file that includes the department number I’m querying.

Obvious…but wrong

You might think the following is a reasonable attempt to do this :

accept deptno prompt 'Enter Department ID : '
spool department_&deptno.lis

select first_name, last_name
from hr.employees
where department_id = &&deptno
order by employee_id
/

spool off

Unfortunately, SQL*Plus insists on being obtuse and outputting the following file :

ls
department_10lis.lst

It is at this point that a colleague came to the rescue ( thanks William)…

Going dotty

This will do the job…

accept deptno prompt 'Enter Department ID : '

spool department_&deptno..lis

select first_name, last_name
from hr.employees
where department_id = &deptno
order by employee_id
/

spool off

Run this and we not only get :

Enter Department ID : 10
old   3: where department_id = &deptno
new   3: where department_id = 10

FIRST_NAME           LAST_NAME
-------------------- -------------------------
Jennifer             Whalen

SQL> 

…we get a file, appropriately named :

ls
department_10.lis

The magic here is that the “.” character delimits the variable substitution.
Just to prove the point, we can do the same with a positional parameter :

set verify off

spool department_&1..lis

select first_name, last_name
from hr.employees
where department_id = &1
order by employee_id
/

spool off

…run this and we get :

SQL> @position_param.sql 10

FIRST_NAME           LAST_NAME
-------------------- -------------------------
Jennifer             Whalen

SQL> 

…and the appropriate file…

ls
department_10.lis

On that note, I’m off to the pub. Now, where did I leave my keys ?


Filed under: Oracle, SQL Tagged: spool; filename including a variable value, SQL*Plus

Video and Slides - Data Caching Strategies for Oracle Mobile Application Framework

Andrejus Baranovski - Sat, 2014-10-18 09:08
I have recorded a video tutorial based on my OOW'14 session - Data Caching Strategies for Oracle Mobile Application Framework. ADF developers who could not attend OOW'14 in San Francisco, this is for you!

Here you can view the slides:


Data Caching Strategies for Oracle Mobile Application Framework
Watch the first part of the tutorial:


Watch the second part of the tutorial:


The described solution is based on the sample application from the blog post - Transactional Data Caching for ADF Mobile MAF Application.

Bandwidth and Latency

Hemant K Chitale - Sat, 2014-10-18 08:40
Here is, verbatim, an article I posted on Linked-In yesterday  [For other posts on Linked-In, view my Linked-In profile] :

Imagine an 8-lane highway. Now imagine a 4-lane highway. Which has the greater bandwidth?

Imagine your organisation sends its employees on a weekend "retreat" by bus. You have the choice of two locations, one that is 200 kilometres away and the other 80 kilometres away. Assume that buses travel at a constant speed of 80kmph. Which resort will your employees get to faster?

The first question is about bandwidth. The second is about latency. (Why should I assume a fixed speed for the buses? Because I can assume a fixed speed at which electrons transfer over a wire or photons over a light channel.)

Expand the question further. What if the organisation needs to send 32 employees in a 40-seater bus? Does it matter that the bus can travel on an 8-lane highway versus a 4-lane highway (assuming minimal other traffic on the highways at that time)?

Too often, naive "architects" do not differentiate between the two. If my organisation needs to configure a standby (DR) location for the key databases and has a choice of two locations but varying types of network services, it should consider *both* bandwidth and latency. If the volume of redo is 1000MBytes per minute and this, factoring overheads for packetizing the "data", translates to 167Mbits per second, should I just go ahead and buy bandwidth of 200Mbits per second? If the two sites have two different network services providers offering different bandwidths, should I simply locate at the site with the greater bandwidth? What if the time it takes to synchronously write my data to site "A" is 4ms and the time to site "B" is 8ms? Should I not factor in the latency? (I am assuming that the "write to disk" speed of hardware at either site is the same -- i.e. the hardware is the same.) I can then add the complications of network routers and switches that add to the latency. Software configurations, flow-control mechanisms, QoS definitions and hardware configuration can also impact bandwidth and latency in different ways.

Now, extend this to data transfers ("output" or "results") from a database server to an application server or end-user. If the existing link is 100Mbps and is upgraded to 1Gbps, the time to "report" 100 rows is unlikely to change, as this time is a function of the latency. However, if the number of concurrent users grows from 10 to 500, the bandwidth requirement may increase and yet each user may still have the same "wait" time to see his results (assuming that there are no server hardware constraints returning results for 500 users).

On the flip side, consider ETL servers loading data into a database. Latency is as important as bandwidth. An ETL scheme that does "row-by-row" loads relies on latency, not bandwidth. Increasing bandwidth doesn't help such a scheme.

Think about the two.
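(As an aside, here is a quick sketch of the arithmetic behind the redo-shipping figure above. The 25% packetization overhead is my assumption, chosen to reproduce the 167Mbits per second number; real overheads depend on the protocol stack.)

# Rough arithmetic for the redo-shipping example above.
redo_mb_per_min = 1000      # MBytes of redo generated per minute
overhead = 1.25             # assumed packetization overhead factor (~25%)

mbits_per_sec = redo_mb_per_min * 8 * overhead / 60
print("required bandwidth ~ %.0f Mbit/s" % mbits_per_sec)   # ~167 Mbit/s

# Latency is a separate question: a synchronous write that takes 4ms to
# site "A" versus 8ms to site "B" is unaffected by buying more bandwidth.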
Categories: DBA Blogs

Bed bugs in Boston – Analysis of Boston 311 public dataset

Nilesh Jethwa - Fri, 2014-10-17 11:22

Digging into the Boston public Dataset can reveal interesting and juicy facts.

Even though there is nothing juicy about bed bugs, the data about Boston's open cases for bed bugs is quite interesting and worth looking at.

We uploaded the entire 50 MB data dump, which is around 500K rows, into the Data Visualizer and filtered the category for Bed Bugs. Splitting the date into its date hierarchy components, we then plotted the month on the Y axis.
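For readers who prefer code over a point-and-click tool, a rough pandas equivalent of that filtering and date-hierarchy grouping could look like the sketch below (the column names CASE_TITLE and OPEN_DT are assumptions; the actual Boston 311 export may name them differently):

import pandas as pd

df = pd.read_csv("boston_311.csv", parse_dates=["OPEN_DT"])

# Keep only the bed bug cases.
bedbugs = df[df["CASE_TITLE"].str.contains("Bed Bug", case=False, na=False)]

# Build the date hierarchy (year / quarter / month) and count cases per bucket.
by_month = bedbugs.groupby([bedbugs["OPEN_DT"].dt.year.rename("year"),
                            bedbugs["OPEN_DT"].dt.quarter.rename("quarter"),
                            bedbugs["OPEN_DT"].dt.month.rename("month")]).size()
print(by_month)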

InfoCaptor : Analytics & dashboards

It seems that the City of Boston started collecting this data around 2011 and has only partial data for that year.

Interestingly, the number of bed bug cases seems to rise during the summer months.

Now if we break the lines into Quarters (we just add the quarter hierarchy to the mix)

InfoCaptor : Analytics & dashboards


Public, private health care systems possess security vulnerabilities

Chris Foot - Fri, 2014-10-17 11:12

System and database administrators from health care institutions are facing several challenges.

On one hand, many are obligated to migrate legacy applications to state-of-the-art electronic health record solutions. In addition, they need to ensure the information contained in those environments is protected.

Operating systems, network configurations and a wealth of other factors can either make or break security architectures. If these components are unable to receive frequent updates from vendor-certified developers, it can cause nightmares for database administration professionals. 

Windows XP no longer a valid option 
When Microsoft ceased to provide support for Windows XP in early April, not as many businesses upgraded to Windows 7 or 8 as the software vendor's leaders had hoped. This means those using XP will not receive regular security updates, leaving them open to attacks as hackers work to find vulnerabilities in the OS.

Despite continuous warnings from Microsoft and the IT community, Information Security Buzz contributor Rebecca Herold believes that a large percentage of medical devices currently in use are running XP. Her allegations are based on reports submitted by health care electronics producers that stated they leverage XP for the sensors' graphical user interfaces, as well as to create a connection to external databases.

Because Microsoft has yet to release the source code of XP, health care companies using these implementations have no way of identifying vulnerabilities independently. Even if the source code was distributed, it's unlikely that the majority of medical providers could use in-house resources to search for security flaws. The only way to defend the servers linked with devices running XP is to employ database active monitoring. 

Public sector experiencing vulnerabilities 
Healthcare.gov apparently isn't picture-perfect, either. Fed Scoop reported that white hat hackers working for the U.S. Department of Health and Human Services' Office of the Inspector General discovered that personally identifiable information was secured, but some data controlled by the Centers for Medicare and Medicaid Services lacked adequate protection.

After an assessment of CMS systems and databases was completed, the IG advised the organization to encrypt files with an algorithm approved by Federal Information Processing Standards 140-2. However, authorities at the CMS deduced this wasn't necessary.

Although this wasn't the first audit of Healthcare.gov (and it likely won't be the last), the information held within its servers is too valuable for cybercriminals to ignore. Setting up an automated, yet sophisticated intrusion detection program to notify DBAs when user activity appears inconsistent is a step the CMS should strongly consider taking. 

The post Public, private health care systems possess security vulnerabilities appeared first on Remote DBA Experts.

JPMorgan hack joins list of largest data breaches in history [VIDEO]

Chris Foot - Fri, 2014-10-17 09:02

Transcript

Hi, welcome to RDX. With news about data breaches sweeping the Web on a regular basis, it's no surprise that the latest victim was a major U.S. bank.

According to Bloomberg, hackers gained access to a server operated by JPMorgan Chase, stealing data on 76 million homes and 7 million small businesses.

After further investigation, the FBI discovered the hackers gained access to a server lacking two-factor authentication. From there, the hackers found fractures in the bank's custom software, through which JPMorgan's security team unknowingly gave them access to data banks.

To prevent such attacks from occurring, firms should regularly assess their databases and solutions to find vulnerabilities.

Thanks for watching! Be sure to visit us next time for info on RDX's security services.

The post JPMorgan hack joins list of largest data breaches in history [VIDEO] appeared first on Remote DBA Experts.

Debugging High CPU Usage Using Perf Tool and vmcore Analysis

Pythian Group - Fri, 2014-10-17 08:08

There are several tools and technologies available to dig deeper into high CPU utilization in a system: perf, sysrq, oprofile, vmcore, and more. In this post, I will narrate the course of debugging a CPU utilization issue using technologies like perf and vmcore.

The following sar output is from a system which was facing high %system usage.

[root@prod-smsgw1 ~]# sar 1 14
Linux 2.6.32-431.20.5.el6.x86_64 (xxxxx) 08/08/2014 _x86_64_ (8 CPU)

05:04:57 PM CPU %user %nice %system %iowait %steal %idle
05:04:58 PM all 2.90 0.00 15.01 0.38 0.00 81.72
05:04:59 PM all 2.02 0.00 10.83 0.13 0.00 87.03
05:05:00 PM all 3.27 0.00 13.98 0.76 0.00 81.99
05:05:01 PM all 9.32 0.00 16.62 0.25 0.00 73.80

From ‘man sar’.

%system
Percentage of CPU utilization that occurred while executing at the system level (kernel). Note
that this field includes time spent servicing hardware and software interrupts.

This means that the system is spending considerable time executing kernel code. The system runs a java application which is showing high CPU usage.

perf (performance analysis tools for Linux) is a good place to start in these kinds of scenarios.

The ‘perf record’ command captures the system state for all CPUs in a perf.data file; -g enables call-graph recording and -p profiles a specific process.

‘perf report’ command would show the report.

Samples: 18K of event ‘cpu-clock’, Event count (approx.): 18445, Thread: java(3284), DSO: [kernel.kallsyms]
58.66% java [k] _spin_lock
31.82% java [k] find_inode
2.66% java [k] _spin_unlock_irqrestore
2.44% java [k] mutex_spin_on_owner

Here we can see that considerable time is spent in the spinlock and find_inode code for the java application.

While the investigation was going on, the system crashed and dumped a vmcore. A vmcore is a memory dump of the system captured by tools like kdump.

I downloaded the debuginfo file and extracted the vmlinux to analyse the vmcore.

# wget http://debuginfo.centos.org/6/x86_64/kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm
# rpm2cpio kernel-debuginfo-2.6.32-431.20.5.el6.x86_64.rpm |cpio -idv ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux

Then I ran the following command.

# crash ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux /var/crash/127.0.0.1-2014-08-07-17\:56\:19/vmcore

KERNEL: ./usr/lib/debug/lib/modules/2.6.32-431.20.5.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2014-08-07-17:56:19/vmcore [PARTIAL DUMP]
CPUS: 8
DATE: Thu Aug 7 17:56:17 2014
UPTIME: 1 days, 13:08:01
LOAD AVERAGE: 91.11, 91.54, 98.02
TASKS: 1417
NODENAME: xxxxx
RELEASE: 2.6.32-431.20.5.el6.x86_64
VERSION: #1 SMP Fri Jul 25 08:34:44 UTC 2014
MACHINE: x86_64 (2000 Mhz)
MEMORY: 12 GB
PANIC: “Oops: 0010 [#1] SMP ” (check log for details)
PID: 11233
COMMAND: “java”
TASK: ffff88019706b540 [THREAD_INFO: ffff880037a90000]
CPU: 6
STATE: TASK_RUNNING (PANIC)

From the vmcore I can see that the dtracedrv module was loaded and unloaded (possibly for running dtrace). This resulted in several warnings (the first warning, from ftrace, is expected) and then the kernel panicked as memory got corrupted. The instruction pointer is corrupted, which points to memory corruption. It looks like the panic was triggered by the dtrace module.

/tmp/dtrace/linux-master/build-2.6.32-431.20.5.el6.x86_64/driver/dtrace.c:dtrace_ioctl:16858: assertion failure buf->dtb_xamot != cached
Pid: 8442, comm: dtrace Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Pid: 3481, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1
Call Trace:
[] ? dump_cpu_stack+0x3d/0x50 [dtracedrv]
[] ? generic_smp_call_function_interrupt+0x90/0x1b0
[] ? smp_call_function_interrupt+0x27/0x40
[] ? call_function_interrupt+0x13/0x20
[] ? _spin_lock+0x1e/0x30
[] ? __mark_inode_dirty+0x6c/0x160
[] ? __set_page_dirty_nobuffers+0xdd/0x160
[] ? nfs_mark_request_dirty+0x1a/0x40 [nfs]
[] ? nfs_updatepage+0x3d2/0x560 [nfs]
[] ? nfs_write_end+0x152/0x2b0 [nfs]
[] ? iov_iter_copy_from_user_atomic+0x92/0x130
[] ? generic_file_buffered_write+0x18a/0x2e0
[] ? nfs_refresh_inode_locked+0x3e1/0xbd0 [nfs]
[] ? __generic_file_aio_write+0x260/0x490
[] ? __put_nfs_open_context+0x58/0x110 [nfs]
[] ? dtrace_vcanload+0x20/0x1a0 [dtracedrv]
[..]
BUG: unable to handle kernel paging request at ffffc90014fb415e
IP: [] 0xffffc90014fb415e
PGD 33c2b5067 PUD 33c2b6067 PMD 3e688067 PTE 0
Oops: 0010 [#1] SMP
last sysfs file: /sys/devices/system/node/node0/meminfo
CPU 6
Modules linked in: cpufreq_stats freq_table nfs fscache nfsd lockd nfs_acl auth_rpcgss sunrpc exportfs ipv6 ppdev parport_pc parport microcode vmware_balloon sg vmxnet3 i2c_piix4 i2c_core shpchp ext4 jbd2 mbcache sd_mod crc_t10dif vmw_pvscsi pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: dtracedrv]
Pid: 11233, comm: java Tainted: P W ————— 2.6.32-431.20.5.el6.x86_64 #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
RIP: 0010:[] [] 0xffffc90014fb415e
RSP: 0018:ffff880037a91f70 EFLAGS: 00010246
RAX: 0000000000000001 RBX: 0000000000000219 RCX: ffff880037a91d40
RDX: 0000000000000001 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 00007fba9a67f4c0 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 00000000000003ff R12: 000000000001d4c0
R13: 0000000000000219 R14: 00007fb96feb06e0 R15: 00007fb96feb06d8
FS: 00007fb96fec1700(0000) GS:ffff880028380000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffc90014fb415e CR3: 000000031e49e000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process java (pid: 11233, threadinfo ffff880037a90000, task ffff88019706b540)
Stack:
0000000000000000 0000000000002be1 ffffffff8100b072 0000000000000293
000000000000ebe6 0000000000002be1 0000000000000000 0000000000000007
00000030692df333 000000000001d4c0 0000000000000001 00007fb96feb06d8
Call Trace:
[] ? system_call_fastpath+0x16/0x1b
Code: Bad RIP value.
RIP [] 0xffffc90014fb415e
RSP
CR2: ffffc90014fb415e
crash>

This vmcore allowed me to have a look at the CPU usage issue happening in the system. Another way to capture a vmcore is to manually panic the system using sysrq + c.

None of the runnable or uninterruptible-sleep processes had been running for a long time.

Looking at the oldest D state process..

crash> bt 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
#0 [ffff88027f3dfd88] schedule at ffffffff815287f0
#1 [ffff88027f3dfe50] __mutex_lock_killable_slowpath at ffffffff8152a0ee
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
#5 [ffff88027f3dff80] system_call_fastpath at ffffffff8100b072
RIP: 00000030692a90e5 RSP: 00007fa0586c51e0 RFLAGS: 00000206
RAX: 000000000000004e RBX: ffffffff8100b072 RCX: 00007fa0cd2cf000
RDX: 0000000000008000 RSI: 00007fa0bc0de9a8 RDI: 00000000000001f6
RBP: 00007fa0bc004cd0 R8: 00007fa0bc0de9a8 R9: 00007fa0cd2fce58
R10: 00007fa0cd2fcaa8 R11: 0000000000000246 R12: 00007fa0bc004cd0
R13: 00007fa0586c5460 R14: 00007fa0cd2cf1c8 R15: 00007fa0bc0de980
ORIG_RAX: 000000000000004e CS: 0033 SS: 002b

Looking at its stack..

crash> bt -f 4776
PID: 4776 TASK: ffff88027f3daaa0 CPU: 6 COMMAND: “java”
[..]
#2 [ffff88027f3dfec0] mutex_lock_killable at ffffffff8152a1f8
ffff88027f3dfec8: ffff88027f3dfed8 ffff8801401e1600
ffff88027f3dfed8: ffff88027f3dff28 ffffffff8119f834
#3 [ffff88027f3dfee0] vfs_readdir at ffffffff8119f834
ffff88027f3dfee8: ffff88027f3dff08 ffffffff81196826
ffff88027f3dfef8: 00000000000001f6 00007fa0bc0de9a8
ffff88027f3dff08: ffff8801401e1600 0000000000008000
ffff88027f3dff18: 00007fa0bc004cd0 ffffffffffffffa8
ffff88027f3dff28: ffff88027f3dff78 ffffffff8119f9f9
#4 [ffff88027f3dff30] sys_getdents at ffffffff8119f9f9
ffff88027f3dff38: 00007fa0bc0de9a8 0000000000000000
ffff88027f3dff48: 0000000000008000 0000000000000000
ffff88027f3dff58: 00007fa0bc0de980 00007fa0cd2cf1c8
ffff88027f3dff68: 00007fa0586c5460 00007fa0bc004cd0
ffff88027f3dff78: 00007fa0bc004cd0 ffffffff8100b072

crash> vfs_readdir
vfs_readdir = $4 =
{int (struct file *, filldir_t, void *)} 0xffffffff8119f7b0
crash>
crash> struct file 0xffff8801401e1600
struct file {
f_u = {
fu_list = {
next = 0xffff88033213fce8,
prev = 0xffff88031823d740
},
fu_rcuhead = {
next = 0xffff88033213fce8,
func = 0xffff88031823d740
}
},
f_path = {
mnt = 0xffff880332368080,
dentry = 0xffff8802e2aaae00
},

[..]

crash> mount|grep ffff880332368080
ffff880332368080 ffff88033213fc00 nfs nanas1a.m-qube.com:/vol/test /scratch/test/test.deploy/test/test-internal

The process was waiting while reading from the above NFS mount.

The following process seems to be the culprit.

crash> bt 9104
PID: 9104 TASK: ffff8803323c8ae0 CPU: 0 COMMAND: “java”
#0 [ffff880028207e90] crash_nmi_callback at ffffffff8102fee6
#1 [ffff880028207ea0] notifier_call_chain at ffffffff8152e435
#2 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff8152e49a
#3 [ffff880028207ef0] notify_die at ffffffff810a11ce
#4 [ffff880028207f20] do_nmi at ffffffff8152c0fb
#5 [ffff880028207f50] nmi at ffffffff8152b9c0
[exception RIP: _spin_lock+30]
RIP: ffffffff8152b22e RSP: ffff88001d209b88 RFLAGS: 00000206
RAX: 0000000000000004 RBX: ffff88005823dd90 RCX: ffff88005823dd78
RDX: 0000000000000000 RSI: ffffffff81fd0820 RDI: ffffffff81fd0820
RBP: ffff88001d209b88 R8: ffff88017b9cfa90 R9: dead000000200200
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88005823dd48
R13: ffff88001d209c68 R14: ffff8803374ba4f8 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
— —
#6 [ffff88001d209b88] _spin_lock at ffffffff8152b22e
#7 [ffff88001d209b90] _atomic_dec_and_lock at ffffffff81283095
#8 [ffff88001d209bc0] iput at ffffffff811a5aa0
#9 [ffff88001d209be0] dentry_iput at ffffffff811a26c0
#10 [ffff88001d209c00] d_kill at ffffffff811a2821
#11 [ffff88001d209c20] __shrink_dcache_sb at ffffffff811a2bb6
#12 [ffff88001d209cc0] shrink_dcache_parent at ffffffff811a2f64
#13 [ffff88001d209d30] proc_flush_task at ffffffff811f9195
#14 [ffff88001d209dd0] release_task at ffffffff81074ec8
#15 [ffff88001d209e10] wait_consider_task at ffffffff81075cc6
#16 [ffff88001d209e80] do_wait at ffffffff810760f6
#17 [ffff88001d209ee0] sys_wait4 at ffffffff810762e3
#18 [ffff88001d209f80] system_call_fastpath at ffffffff8100b072

From upstream kernel source..

/**
* iput – put an inode
* @inode: inode to put
*
* Puts an inode, dropping its usage count. If the inode use count hits
* zero, the inode is then freed and may also be destroyed.
*
* Consequently, iput() can sleep.
*/
void iput(struct inode *inode)
{
if (inode) {
BUG_ON(inode->i_state & I_CLEAR);

if (atomic_dec_and_lock(&inode->i_count, &inode->i_lock))
iput_final(inode);
}
}
EXPORT_SYMBOL(iput);

#include
/**
* atomic_dec_and_lock – lock on reaching reference count zero
* @atomic: the atomic counter
* @lock: the spinlock in question
*
* Decrements @atomic by 1. If the result is 0, returns true and locks
* @lock. Returns false for all other cases.
*/
extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
#define atomic_dec_and_lock(atomic, lock) \
__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))

#endif /* __LINUX_SPINLOCK_H */

It looks like the process was trying to drop the dentry cache and was holding the spinlock while dropping an inode associated with it. This left other processes waiting on the spinlock, resulting in high %system utilization.

When the system again showed high %sys usage, I checked and found a large slab cache.

[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 4505788 kB
SReclaimable: 4313672 kB
SUnreclaim: 192116 kB

Checking slab in a running system using slabtop, I saw that nfs_inode_cache is the top consumer.

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
[..]
2793624 2519618 90% 0.65K 465604 6 1862416K nfs_inode_cache
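As an aside, here is a minimal Python sketch of doing the same check programmatically (it assumes root access and the usual /proc/slabinfo 2.1 layout, where the second and third numeric columns are the object count and object size):

def top_slab_caches(n=5):
    """Return the n slab caches using the most memory, much as slabtop does."""
    rows = []
    with open("/proc/slabinfo") as f:
        for line in f:
            if line.startswith(("slabinfo", "#")):
                continue                    # skip the version and header lines
            fields = line.split()
            name, num_objs, objsize = fields[0], int(fields[2]), int(fields[3])
            rows.append((num_objs * objsize, name))
    return sorted(rows, reverse=True)[:n]

for size_bytes, name in top_slab_caches():
    print("%-30s ~%8.1f MB" % (name, size_bytes / 1024.0 / 1024.0))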

I ran ‘sync’ and then ‘echo 2 > /proc/sys/vm/drop_caches’ to drop the dcache, which fixed the high %sys usage in the system.

[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:04:45 AM CPU %user %nice %system %iowait %steal %idle
11:04:46 AM all 1.51 0.00 13.22 0.50 0.00 84.76
11:04:47 AM all 1.25 0.00 12.55 0.13 0.00 86.07
11:04:48 AM all 1.26 0.00 8.83 0.25 0.00 89.66
11:04:49 AM all 1.63 0.00 11.93 0.63 0.00 85.80
^C
[root@xxxxx ~]# sync
[root@xxxxx ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:23 AM CPU %user %nice %system %iowait %steal %idle
11:05:24 AM all 1.50 0.00 13.03 0.75 0.00 84.71
11:05:25 AM all 1.76 0.00 9.69 0.25 0.00 88.30
11:05:26 AM all 1.51 0.00 9.80 0.25 0.00 88.44
11:05:27 AM all 1.13 0.00 10.03 0.25 0.00 88.60
^C
[root@xxxxx ~]# echo 2 > /proc/sys/vm/drop_caches
[root@xxxxx ~]# cat /proc/meminfo
[..]
Slab: 67660 kB

[root@prod-smsgw4 ~]# sar 1 10
Linux 3.10.50-1.el6.elrepo.x86_64 (prod-smsgw4.sav.mqube.us) 08/12/2014 _x86_64_ (8 CPU)

11:05:58 AM CPU %user %nice %system %iowait %steal %idle
11:05:59 AM all 1.64 0.00 1.38 0.13 0.00 96.86
11:06:00 AM all 2.64 0.00 1.38 0.38 0.00 95.60
11:06:01 AM all 2.02 0.00 1.89 0.25 0.00 95.84
11:06:02 AM all 2.03 0.00 1.39 4.68 0.00 91.90
11:06:03 AM all 8.21 0.00 2.27 2.65 0.00 86.87
11:06:04 AM all 1.63 0.00 1.38 0.13 0.00 96.86
11:06:05 AM all 2.64 0.00 1.51 0.25 0.00 95.60

From kernel documentation,

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches

The java application was traversing NFS and accessing a large number of files, resulting in a large number of nfs_inode_cache entries and hence a large dcache.

Tuning vm.vfs_cache_pressure would be a persistent solution for this.

From kernel documentation,

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a “fair” rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Categories: DBA Blogs

NZOUG14 Beckons

Pythian Group - Fri, 2014-10-17 07:50

New Zealand is famous for Kiwis, pristine landscape, and the New Zealand Oracle User Group (NZOUG) conference.  The location of choice is New Zealand when it comes to making Lord of the Rings and making Oracle Lord of the Databases.

NZOUG 2014 will be held 19–21 November in the Owen G. Glenn Building at the University of Auckland. The main conference will be held on the 20th and 21st, preceded by a day of workshops on the 19th. It’s one of the premier Oracle conferences in the Southern Hemisphere.

Where there is Oracle, there is Pythian. Pythian will be present in full force in NZOUG 2014.

Following are Pythian sessions at NZOUG14:

12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management
Fahd Mirza Chughtai

Everyone Talks About DR – But Why So Few Implement It
Francisco Munoz Alvarez

DBA 101: Calling All New Database Administrators
Gustavo Rene Antunez

My First 100 Days with an Exadata
Gustavo Rene Antunez

Do You Really Know the Index Structures?
Deiby Gómez

Oracle Exadata: Storage Indexes vs Conventional Indexes
Deiby Gómez

Oracle 12c Test Drive
Francisco Munoz Alvarez

Why Use OVM for Oracle Database
Francisco Munoz Alvarez

Please check the full agenda of NZOUG14 here.

Categories: DBA Blogs

Log Buffer #393, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-17 07:47

Bloggers connect databases and their readers through their blogs, acting as a bridge between the two. Log Buffer extends this nexus through the Log Buffer Edition.

Oracle:

MS Sharepoint and Oracle APEX integration.

Just a couple of screenshots of sqlplus+rlwrap+cygwin+console.

Say “Big Data” One More Time (I dare you!)

Update OEM Harvester after 12.1.0.4 Upgrade

Insight in the Roadmap for Oracle Cloud Platform Services.

SQL Server:

Troubleshoot SQL P2P replication doesn’t replicate DDL schema change.

Set-based Constraint Violation Reporting in SQL Server.

Where do you start fixing a SQL Server crash when there isn’t a single clue?

A permission gives a principal access to an object to perform certain actions on or with the object.

When you can’t get to your data because another application has it locked, a thorough knowledge of SQL Server concurrency will give you the confidence to decide what to do.

MySQL:

MySQL 5.7.5- More variables in replication performance_schema tables.

Multi-source replication for MySQL has been released as a part of 5.7.5-labs-preview downloadable from labs.mysql.com.

How to install multiple MySQL instances on a single host using MyEnv?

Percona Toolkit for MySQL with MySQL-SSL Connections.

InnoDB: Supporting Page Sizes of 32k and 64k.

Categories: DBA Blogs

sreadtim

Jonathan Lewis - Fri, 2014-10-17 06:22

Here’s a question that appeared in my email a few days ago:

 

Based on the formula: “sreadtim = ioseektim + db_block_size/iotrfrspeed” sreadtim should always bigger than ioseektim.

But I just did a query on my system, find it otherwise, get confused,

SQL> SELECT * FROM SYS.AUX_STATS$;

SNAME                          PNAME                               PVAL1 PVAL2
------------------------------ ------------------------------ ---------- --------------------
SYSSTATS_INFO                  STATUS                                    COMPLETED
SYSSTATS_INFO                  DSTART                                    10-08-2014 10:45
SYSSTATS_INFO                  DSTOP                                     10-10-2014 10:42
SYSSTATS_INFO                  FLAGS                                   1
SYSSTATS_MAIN                  CPUSPEEDNW                     680.062427
SYSSTATS_MAIN                  IOSEEKTIM                              10
SYSSTATS_MAIN                  IOTFRSPEED                           4096
SYSSTATS_MAIN                  SREADTIM                            4.716
SYSSTATS_MAIN                  MREADTIM                            2.055
SYSSTATS_MAIN                  CPUSPEED                             1077
SYSSTATS_MAIN                  MBRC                                    4
SYSSTATS_MAIN                  MAXTHR                          956634112
SYSSTATS_MAIN                  SLAVETHR                           252928

How do we explain this ?

 

This question highlights two points – one important, the other only slightly less so.

The really important point is one of interpretation.  Broadly speaking we could reasonably say that the (typical) time required to perform a single block read is made up of the (typical) seek time plus the transfer time which, using the names of the statistics above, would indeed give us the relationship: sreadtim = ioseektim + db_block_size/iotfrspeed; but we have to remember that we are thinking of a simplified model of the world. The values that we capture for sreadtim include the time it takes for a request to get from Oracle to the O/S, through the various network software and hardware layers and back again, the formula ignores those components completely and, moreover, doesn’t allow for the fact that some “reads” could actually come from one of several caches without any physical disc access taking place; similarly we should be aware that the time for an actual I/O seek would vary dramatically with the current position  of the read head, the radial position of the target block, the speed and current direction of movement of the read head, and the rotational distance to the target block. The formula is not attempting to express a physical law, it is simply expressing an approximation that we might use in a first line estimate of performance.
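As a rough worked example (assuming an 8KB db_block_size, which is not shown in the AUX_STATS$ output above), the noworkload defaults would give:

sreadtim = ioseektim + db_block_size/iotfrspeed = 10 + 8192/4096 = 12 milliseconds

which is some way above the 4.716 milliseconds that the workload statistics actually report; that gap is exactly the sort of mismatch discussed below.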

In fact we can see in the figures above that multi-block reads (typically of 4 blocks)  were faster than single block reads on this hardware for the duration of the sampling period – and that clearly doesn’t fit with the simple view embedded in our formula of how disc drives work.  (It’s a fairly typical effect of SANs, of course, that large read requests make the SAN software start doing predictive read-ahead, so that the next read request from Oracle may find that the SAN has already loaded the required data into its cache.)

There is, however, the second point that these figures highlight – but you have to be in the know to spot the detail: whatever the complexities introduced by SAN caching, we’re not looking at the right figures. The ioseektim and iotfrspeed shown here are the default values used by Oracle. It looks as if the user has called dbms_stats.gather_system_stats() with a 48 hour workload (8th Oct to 10th Oct), but hasn’t yet executed the procedure using the ‘noworkload’ option. Perhaps the ioseektim and iotfrspeed figures from a noworkload call would look a little more reasonable when compared with the 4.716 milliseconds of the workload single block read. There may still be a large gap between the model and the reality, but until the two sets of figures we’re using come from the same place we shouldn’t even think about comparing them.


Using rlwrap with Apache Hive beeline for improved readline functionality

Rittman Mead Consulting - Fri, 2014-10-17 06:18

rlwrap is a nice little wrapper in which you can invoke commandline utilities and get them to behave with full readline functionality just like you’d get at the bash prompt. For example, up/down arrow keys to move between commands, but also home/end to go to the start/finish of a line, and even ctrl-R to search through command history to rapidly find a command. It’s one of the standard config changes I’ll make to any system with Oracle’s sqlplus on, and it works just as nicely with Apache Hive’s commandline interface, beeline.

beeline comes with some of this functionality (up/down arrow) but not all; for me, it was ‘home’ and ‘end’ not working (printing 1~ and 5~ respectively instead) that prompted me to set up rlwrap with it.

Installing rlwrap

To install rlwrap simply add the EPEL yum packages to your repository configuration:

sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/`uname -p`/epel-release-6-8.noarch.rpm

and then install rlwrap from yum:

sudo yum install -y rlwrap

Use

Once rlwrap is installed you can invoke beeline through it manually, specifying all the standard beeline options as you would normally: (I’ve used the \ line continuation character here just to make the example nice and clear)

rlwrap -a beeline \
-u jdbc:hive2://bdanode1:10000 \
-n rmoffatt -p password \
-d org.apache.hive.jdbc.HiveDriver

Now I can connect to beeline, and as before I press up arrow to access commands from when I previously used the tool, but I can also hit ctrl-R to start typing part of a command to recall it, just like I would in bash. Some other useful shortcuts:

  • Ctrl-l clears the screen but with the current line still shown
  • Ctrl-k deletes to the end of the line from the current cursor position
  • Ctrl-u deletes to the beginning of the line from the current cursor position
  • Esc-f move forward one word
  • Esc-b move backward one word
    (more here)

And most importantly, Home and End work just fine! (or, ctrl-a/ctrl-e if you prefer).

NB the -a argument for rlwrap is necessary because beeline already does some readline-esque functions, and we want rlwrap to forcibly override them (otherwise neither works very well). Or more formally (from man rlwrap):

Always remain in “readline mode”, regardless of command’s terminal settings. Use this option if you want to use rlwrap with commands that already use readline.

Alias

A useful thing to do is to add an alias directly in your profile so that it is always available to launch beeline under rlwrap, in this case as the rlbeeline command:

# this sets up "rlbeeline" as the command to run beeline
# under rlwrap, you can call it what you want though. 
cat >> ~/.bashrc<<EOF
alias rlbeeline='rlwrap -a beeline'
EOF
# example usage:
# rlbeeline \
# -u jdbc:hive2://bdanode1:10000 \
# -n rmoffatt -p password \
# -d org.apache.hive.jdbc.HiveDriver

If you want this alias available for all users on a machine create the above as a standalone .sh file in /etc/profile.d/.

Autocomplete

One possible downside of using rlwrap with beeline is that you lose the native auto-complete option within beeline for the HiveQL statements. But never fear – we can have the best of both worlds, with the -f argument for rlwrap, specifying a list of custom auto-completes. So this is even a level-up for beeline, because we could populate it with our own schema objects and so on that we want auto-completed.

As a quick-start, run beeline without rlwrap, hit tab twice and then ‘y’ to show all options and paste the resulting list into a text file (e.g. beeline_autocomplete.txt). Now call beeline, via rlwrap, passing that file as an argument to rlwrap:

rlwrap -a -f beeline_autocomplete.txt beeline

Once connected, use auto-complete just as you would normally (hit tab after typing a character or two of the word you’re going to match):

Connecting to jdbc:hive2://bdanode1:10000
Connected to: Apache Hive (version 0.12.0-cdh5.0.1)
[...]
Beeline version 0.12.0-cdh5.0.1 by Apache Hive
0: jdbc:hive2://bdanode1:10000> SE
SECOND        SECTION       SELECT        SERIALIZABLE  SERVER_NAME   SESSION       SESSION_USER  SET
0: jdbc:hive2://bdanode1:10000> SELECT

Conclusion

rlwrap is the tool that keeps on giving; just as I was writing this article, I noticed that it also auto-highlights opening parentheses when typing the closing one. Nice!

Categories: BI & Warehousing

Windows 10: The new features

Yann Neuhaus - Fri, 2014-10-17 02:03

First, it should have been Windows 9, but finally the new Microsoft operating system is named Windows 10! In the same way as for my blog post on Windows 8, I have decided to write a few words about the new features of Windows 10.

Documentum upgrade project: Configuring NTLM SSO for D2 3.1 SP1

Yann Neuhaus - Thu, 2014-10-16 20:11

The Documentum D2 3.1 SP1 release is kind of a mix between the D2 4.1 APIs (using D2FS in the backend) and the D2 3.1 front-end. It needs SSO to be fully implemented, and the configuration has to be applied for 3.1 as well as for D2FS. For D2FS, the same configuration applies whether you are using NT Lan Manager (NTLM) or Kerberos authentication.

If you want to implement Kerberos Single Sign On instead of NTLM, have a look at this blog post: http://www.dbi-services.com/index.php/blog/entry/kerberos-sso-with-documentum-d2-31-sp1

 

1. NTLM configuration for D2 3.1 SP1

The D2 3.1 documentation explains how to configure NTLM for D2 3.1 SP1.

Referring to the D2 3.1 installation guide, you can see the following:

 

Locate the file « shiro.ini » used by D2 applications and add the following lines:

 

 

[main]

D2-NTLM=eu.c6.d2.web.filters.authc.D2NtlmHttpAuthenticationFilter

D2-NTLM.domainController=<domain controller>

D2-NTLM.domainName=<domain name>

D2-NTLM.domainUser=<domain user to authentify>

D2-NTLM.domainPassword=<user's passwords>

D2-NTLM.docbases=<docbase1,superUser1,password1,domain1|docbase2,...>

 

[urls]

/** = D2-NTLM

 

 

“docbaseX”: corresponds to a docbase using NTLM

“loginX”: corresponds to a supersuser login of “docbaseX”

“passwordX”: corresponds to an encrypted password of the superuser of “docbaseX”.

 

In our case, the file is located in the following path: <Tomcat root>/webapps/D2-Client/WEB-INF/classes/  

At first look, everything is there. However, some clarifications are very welcome.

 

About Active directory connection:

  • <domain controller>: enter the domain controller IP address

  • <domain name>: This is the Active Directory domain. For a user principal name of “InternalDom\userloginname”, you must use “InternalDom”.

  • <domain user to authentify>: User name for the authentication against the domain controller. For a user principal name of “InternalDom\userloginname”, you must use “userloginname”.

 

About Documentum repository connection:

  • <docbaseX>: enter the name of the docbase

  • <superUserX>: enter a user name which is a super user for docbaseX

  • <passwordX>: enter encoded password for related super user name

 

2. NTLM configuration for D2FS

2.1 D2 3.1 SP1

You must be aware - at least since patch 02 for D2 3.1 SP1 - that the way to store the password for the related super user name has changed.

Referring to D2 4.1 installation guide, you can see the following:

 

If d2fs-trust.properties does not exist, create the file in the webapps/D2/WEB-INF/classes/ folder using a text editor. Open d2fs-trust.properties in the folder webapps/D2/WEB-INF/classes/ and add following lines:

 

 

*.user=<administrator user>

*.password=<encoded password>

*.domain=<your domain> [not mandatory]

#or for each repository

<repository>.user=<administrator user>

<repository>.password=<encoded password>

<repository>.domain=<your domain>

 

 

  • Repository corresponds to the repository.
  • User and password are the username and password of an inline Super User account in the repository.
  • Domain means the domain of the repository and can be left out for inline accounts.

 

In our case, the file is located in the following path: <Tomcat root>/webapps/D2-Client/WEB-INF/classes/

Everything is there. However, again, some clarifications are welcome:

  • <repository>: replace it with the name of the docbase

  • <administrator user>: enter a user name which is a super user for docbaseX.

  • <encoded password>: enter the password for the related super user name.

  • <domain>: Active Directory domain. For a user principal name of “InternalDom\userloginname”, you must use “InternalDom”.

 

2.2 D2 3.1 SP1 P02

Since this release of D2, you must store the <encoded password> in the D2 lockbox.

Make sure you have installed the lockbox functionality properly, and that it is already working between D2 and its Java Method Server.

Then you can remove all lines related to passwords in the d2fs-trust.properties files:

 

 

*.user=<administrator user>

*.domain=<your domain> [not mandatory]

#or for each repository

<repository>.user=<administrator user>

<repository>.domain=<your domain>

 

 

Then, you can execute the following command:


 

java -classpath "<Tomcat root>\webapps\D2-Client\WEB-INF\lib\*" com.emc.common.java.crypto.SetLockboxProperty <D2 lockbox path>D2FS-trust.<Repository name>.password <user password>

 

 

Where:

  • <Tomcat root>: Root path of the tomcat instance

  • <D2 lockbox path>: Folder path where the D2.lockbox is stored

  • <Repository name>: Name of the repository

  • <user password>: Clear-text password of the super user set up in the d2fs-trust.properties file

 

Make sure “D2FS” is in uppercase.

 

3. Working Example with lockbox

We will now see a few examples of working configurations. Obviously, this setup may not be the only one that achieves Single Sign On authentication, and you will certainly be able to identify where some adaptation can be made for your environment.

 

Suppose we have the following environment:

MS domain controller address : “10.0.0.1”

MS domain name: “InternalDomain”

MS domain user principal name: “InternalDomain\DomUser”

MS domain user password: “DomPasswd”

 

Tomcat root: “C:\Tomcat”

Lockbox file location: “C:\Lockbox\d2.lockbox”

 

First repository name: “DCTMRepo1”

Second repository name: “DCTMRepo2”

 

Ensure that you have stopped all D2-Client application instances on the application server, as well as the D2-Config.

 

3.1 Inline super user creation

The user you are going to create must have the following attributes:

 

 

- State: Active

- Name: SSOAdmin

- Login Name: SSOAdmin

- Login Domain: InternalDomain

- Password: RepoPasswd

- User Source: Inline Password

- Privileges: Superuser

- Extended Privileges: None

- Client Capability: Consumer

 

 

Create such a user in all repositories. In this example, we take it as given that the same has been done for both repositories.

 

3.2 Shiro.ini file content

First, we must encode the password of the MS domain user name and the SSOAdmin:

 

 

java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" com.emc.d2.api.utils.GetCryptedPassword DomPasswd

UCmaB39fRLM6gRj/Gy3MJA==

 

java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" com.emc.d2.api.utils.GetCryptedPassword RepoPasswd

8RLQerkftOBCedjQNEz57Q==

 

 

Then, we can fill in the file:

 

 

[main]

D2-NTLM=eu.c6.d2.web.filters.authc.D2NtlmHttpAuthenticationFilter

D2-NTLM.domainController=10.0.0.1

D2-NTLM.domainName=InternalDomain

D2-NTLM.domainUser=DomUser

D2-NTLM.domainPassword=UCmaB39fRLM6gRj/Gy3MJA==

D2-NTLM.docbases=DCTMRepo1,SSOAdmin,8RLQerkftOBCedjQNEz57Q==,InternalDomain|DCTMRepo2,SSOAdmin,8RLQerkftOBCedjQNEz57Q==,InternalDomain

 

[urls]

/** = D2-NTLM

 

 

3.3 d2fs-trust.properties file content


 

DCTMRepo1.user=SSOAdmin

DCTMRepo1.domain=InternalDomain

DCTMRepo2.user=SSOAdmin

DCTMRepo2.domain=InternalDomain

 


3.4 D2.lockbox password store


 

java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" com.emc.common.java.crypto.SetLockboxProperty C:\LockboxD2FS-trust.DCTMRepo1.password RepoPasswd

 

java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" com.emc.common.java.crypto.SetLockboxProperty C:\LockboxD2FS-trust.DCTMRepo2.password RepoPasswd

 

 

That's it. Restart the D2-Client application and test it.

Thanks for reading!

Oracle Cloud Control 12c: removing an agent is much easier in OEM 12.1.0.4

Yann Neuhaus - Thu, 2014-10-16 19:54

To remove an agent in the previous Oracle OEM Cloud Control 12c versions, you first had to delete its targets, stop the agent, and remove the host target. Only then were you able to remove the agent. In version 12.1.0.4, the decommission agent feature has been greatly improved.

From the agent menu, you select the new feature Agent Decommission:

 

ag1

 

The agent must be in stopped status to be decommissioned:

 

ag2

ag3

 

It shows the targets which will be removed.

 

ag4

 

In a very short time, the targets are deleted and the agent is removed.

The agent decommissioning feature is just a small new feature, but it will greatly simplify the agent removal procedure :=)

NZOUG14 Beckons

Pakistan's First Oracle Blog - Thu, 2014-10-16 19:24
New Zealand is famous for Kiwis, pristine landscape, and New Zealand Oracle User Group (NZOUG) conference.  The location of choice is New Zealand when it comes to making Lord of the Rings and making Oracle Lord of the Databases.


NZOUG 2014 will be held 19–21 November in the Owen G. Glenn Building at the University of Auckland. The main conference will be held on the 20th and 21st, preceded by a day of workshops on the 19th. It's one of the premier Oracle conferences in the Southern Hemisphere.

Where there is Oracle, there is Pythian. Pythian will be present in full force in NZOUG 2014.

Following are Pythian sessions at NZOUG14:

12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management
Fahd Mirza Chughtai

Everyone Talks About DR – But Why So Few Implement It
Francisco Munoz Alvarez

DBA 101: Calling All New Database Administrators
Gustavo Rene Antunez

My First 100 Days with an Exadata
Gustavo Rene Antunez

Do You Really Know the Index Structures?
Deiby Gómez

Oracle Exadata: Storage Indexes vs Conventional Indexes
Deiby Gómez

Oracle 12c Test Drive
Francisco Munoz Alvarez

Why Use OVM for Oracle Database
Francisco Munoz Alvarez
Please check the full agenda of NZOUG14 here.
Categories: DBA Blogs

MS Sharepoint and Oracle APEX integration

Dimitri Gielis - Thu, 2014-10-16 15:11
At Oracle Open World I gave a presentation about the integration of Microsoft Sharepoint and Oracle Application Express (APEX).

I see a lot of companies using Microsoft Sharepoint as a portal for their intranet. For many people it’s the first place they go to when they start their day. But to do their job they also make use of other applications, some of which are built in Oracle Application Express (APEX). This presentation will show the different options you have to integrate the two worlds of Sharepoint and APEX.

The integration can go both ways:
  • in Sharepoint you get data or screens from APEX 
  • in APEX you want to use data (or a screen) maintained and coming from Sharepoint. 


In the next weeks I'll add some more detailed blog posts about things I covered during the presentation... for example, how to set up your own MS Sharepoint environment.
Categories: Development

Oracle Priority Support Infogram for 16-OCT-2014

Oracle Infogram - Thu, 2014-10-16 14:41

RDBMS
From Pythian: Log Buffer #391, A Carnival of the Vanities for DBAs
Services
A good metaphor on services at LinkedIn from Albert Barron: Pizza as a Service.
Why you should use Application Services with your Oracle Database, from The Oracle Instructor.
SQL Developer
Real Time SQL Monitoring Support in Oracle SQL Developer [Video], From that Jeff Smith.
EBS
New Innovative Learning Solution for Oracle E-Business Suite, from Oracle University.
Java
From Across the Universe: JavaOne presentations available
OEM
Using JVM Diagnostics (JVMD) to help tune production JVMs, the Oracle Enterprise Manager blog.
Security
From Securelist: Mobile Cyber-threats: A Joint Study by Kaspersky Lab and INTERPOL.
Business
10 Body Language Tips Every Speaker Must Know (Infographic), from Entrepreneur.
Books

An interesting new marketing approach: Fluent Python is being published by O’Reilly as an early release.

Just a couple of screenshots of sqlplus+rlwrap+cygwin+console

XTended Oracle SQL - Thu, 2014-10-16 14:06

I previously wrote that I picked up the idea of showing the session information in the terminal title from Timur Akhmadeev’s screenshots, and Timur wrote:

I’m using (a bit modified) Tanel Poder’s login.sql available in his TPT scripts library: http://tech.e2sn.com/oracle-scripts-and-tools

Scripts:
Tanel’s i.sql
My title.sql and on_login.sql

Colored prompt is the one of many features of rlwrap.

Screenshots:
Connected as simple user:
baikal-xtender
Connected as sysdba:
xtsql-sysdba

SQL*Plus on OEL through putty:
putty-to-oel-sqlplus

@inc/title “*** Test ***”
inc-title-test

Categories: Development

Competency-Based Education: Not just a drinking game

Michael Feldstein - Thu, 2014-10-16 13:48

Ray Henderson captured the changing trend of the past two EDUCAUSE conferences quite well.

The #Edu14 drinking game: sure inebriation in 13 from vendor claims of "mooc" "cloud" or "disrupting edu". In 2014: "competency based."

— Ray Henderson (@readmeray) October 3, 2014


Two years ago, the best-known competency-based education (CBE) initiatives were at Western Governors University (WGU), Southern New Hampshire University’s College for America (CfA), and SUNY’s Excelsior College. In an article this past summer describing the US Department of Education’s focus on CBE, Paul Fain noted [emphasis added]:

The U.S. Department of Education will give its blessing — and grant federal aid eligibility — to colleges’ experimentation with competency-based education and prior learning assessment.

On Tuesday the department announced a new round of its “experimental sites” initiative, which waives certain rules for federal aid programs so institutions can test new approaches without losing their aid eligibility. Many colleges may ramp up their experiments with competency-based programs — and sources said more than 350 institutions currently offer or are seeking to create such degree tracks.

One issue I’ve noticed, however, is that many schools are looking to duplicate the solution of CBE without understanding the problems and context that allowed WGU, CfA and Excelsior to thrive. By looking at the three main CBE initiatives, it is important to note at least three lessons that are significant factors in their success to date, and these lessons are readily available but perhaps not well-understood.

Lesson 1: CBE as means to address specific student population

None of the main CBE programs were designed to target a general student population or to offer just another modality. In all three cases, their first consideration was how to provide education to working adults looking to finish a degree, change a career, or advance a career.

As described by WGU’s website:

Western Governors University is specifically designed to help adult learners like you fit college into your already busy lives. Returning to college is a challenge. Yet, tens of thousands of working adults are doing it. There’s no reason you can’t be one of them.

As described by College for America’s website:

We are a nonprofit college that partners with employers nationwide to make a college degree possible for their employees. We help employers develop their workforce by offering frontline workers a competency-based degree program built on project-based learning that is uniquely applicable in the workplace, flexibly scheduled to fit in busy lives, and extraordinarily affordable.

As described by Excelsior’s website:

Excelsior’s famously-flexible online degree programs are created for working adults.

SNHU’s ubiquitous president Paul Leblanc described the challenge of not understanding the target for CBE at last year’s WCET conference (from my conference notes):

One of the things that muddies our own internal debates and policy maker debates is that we say things about higher education as if it’s monolithic. We say that ‘competency-based education is going to ruin the experience of 18-year-olds’. Well, that’s a different higher ed than the people we serve in College for America. There are multiple types of higher ed with different missions.

The one CfA is interested in is the world of working adults – this represents the majority of college students today. Working adults need credentials that are useful in the workplace, they need low cost, they need short completion times, and they need convenience. Education has to compete with work and family requirements.

CfA targets the bottom 10% of wage earners in large companies – these are the people not earning sustainable wages. They need stability and advancement opportunities.

CfA has two primary customers – the students and the employers who want to develop their people. In fact, CfA does not have a retail offering, and they directly work with employers to help employees get their degrees.

Lesson 2: Separate organizations to run CBE

In all three cases the use of CBE to serve working adults necessitated entirely new organizations that were designed to provide the proper support and structure based on this model.

WGU was conceived as a separate non-profit organization in 1995 and incorporated in 1997 specifically to design and enable the new programs. College for America was spun out of SNHU in 2012. Excelsior College started 40 years ago as Regents College, focused on both mastery and competency-based programs. The CBE nursing program was founded in 1975.

CBE has some unique characteristics that do not fit well within traditional educational organizations. From a CBE primer I wrote in 2012 and updated in 2013:

I would add that the integration of self-paced programs not tied to credit hours into existing higher education models presents an enormous challenge. Colleges and universities have built up large bureaucracies – expensive administrative systems, complex business processes, large departments – to address financial aid and accreditation compliance, all based on fixed academic terms and credit hours. Registration systems, and even state funding models, are tied to the fixed semester, quarter or academic year – largely defined by numbers of credit hours.

It is not an easy task to allow transfer credits coming from a self-paced program, especially if a student is taking both CBE courses and credit-hour courses at the same time. The systems and processes often cannot handle this dichotomy.

Beyond the self-paced student-centered scheduling issues, there are also different mentoring roles required to support students, and these roles are not typically understood or available at traditional institutions. Consider the mentoring roles at WGU as described in EvoLLLutions:

Faculty mentors (each of whom have at least a master’s degree) are assigned a student caseload and their full-time role is to provide student support. They may use a variety of communication methods that, depending on student preferences, include calling — but also Skype, email and even snail mail for encouraging notes.

Course mentors are the second type of WGU mentor. These full-time faculty members hold their Ph.D. and serve as content experts. They are also assigned a student caseload. Responsibilities of course mentors include creating a social community among students currently enrolled in their courses and teaching webinars focused specifically on competencies students typically find difficult. Finally, they support students one-on-one based on requests from the student or referral from the student’s faculty mentor.

Lesson 3: Competency is not the same as mastery

John Ebersole, the president of Excelsior College, called out the distinction between competency and mastery in an essay this summer at Inside Higher Ed.

On close examination, one might ask if competency-based education (or CBE) programs are really about “competency,” or are they concerned with something else? Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using various forms of examinations, projects and other forms of assessment.

However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone doing so with the skill and proficiency which would be associated with competence.

Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence.

Ebersole goes on to describe the need for true competency measurement, and his observation, which I share, about programs confusing the two concepts.

A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. The appropriateness of the word competency is questioned when there is no assessment of the use of the learning achieved through a CBE program. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.

Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.

Whither the Bandwagon

I don’t think that the potential of CBE is limited only to the existing models nor do I think WGU, CfA, and Excelsior are automatically the best initiatives. But an aphorism variously attributed to Pablo Picasso, Dalai Lama XIV or bassist Jeff Berlin might provide guidance to the new programs:

Know the rules well, so you can break them effectively

How many new CBE programs are being attempted that target the same student population as the parent institutions? How many new CBE programs are being attempted in the same organization structure? And how many new CBE programs are actually based on testing only of masteries and not competencies?

Judging by media reports and observations at EDUCAUSE, I think there are far too many programs attempting this new educational model of CBE as a silver bullet. They are moving beyond the model and lessons from WGU, College for America and Excelsior without first understanding why those initiatives have been successful. I don’t intend to name names here but just to note that the 350 new programs cited in Paul Fain’s article would do well to ground themselves in a solid foundation that understands and builds off of successful models.

The post Competency-Based Education: Not just a drinking game appeared first on e-Literate.