
Feed aggregator

Query result cache in Oracle 11g

Adrian Billington - Tue, 2014-06-24 16:58
Oracle adds a new cache for storing the results of queries. December 2007 (updated June 2014)

Virtual CPUs with Amazon Web Services

Pythian Group - Tue, 2014-06-24 15:41

Some months ago, Amazon Web Services changed the way they measure CPU capacity on their EC2 compute platform. In addition to the old ECUs, there is a new unit to measure compute capacity: vCPUs. The instance type page defines a vCPU as “a hyperthreaded core for M3, C3, R3, HS1, G2, and I2.” The description seems a bit confusing: is it a dedicated CPU core (which has two hyperthreads in the E5-2670 v2 CPU platform being used), or is it a half-core, single hyperthread?

I decided to test this out for myself by setting up one of the new-generation m3.xlarge instances (with thanks to Christo for technical assistance). It is stated to have 4 vCPUs running an E5-2670 v2 processor at 2.5GHz on the Ivy Bridge-EP microarchitecture (or sometimes 2.6GHz in the case of xlarge instances).

Investigating for ourselves

I’m going to use paravirtualized Amazon Linux 64-bit for simplicity:

$ ec2-describe-images ami-fb8e9292 -H
Type    ImageID Name    Owner   State   Accessibility   ProductCodes    Architecture    ImageType       KernelId        RamdiskId Platform        RootDeviceType  VirtualizationType      Hypervisor
IMAGE   ami-fb8e9292    amazon/amzn-ami-pv-2014.03.1.x86_64-ebs amazon  available       public          x86_64  machine aki-919dcaf8                      ebs     paravirtual     xen
BLOCKDEVICEMAPPING      /dev/sda1               snap-b047276d   8

Launching the instance:

$ ec2-run-instances ami-fb8e9292 -k marc-aws --instance-type m3.xlarge --availability-zone us-east-1d
RESERVATION     r-cde66bb3      462281317311    default
INSTANCE        i-b5f5a2e6      ami-fb8e9292                    pending marc-aws        0               m3.xlarge       2014-06-16T20:23:48+0000  us-east-1d      aki-919dcaf8                    monitoring-disabled                              ebs                                      paravirtual     xen             sg-5fc61437     default

The instance is up and running within a few minutes:

$ ec2-describe-instances i-b5f5a2e6 -H
Type    ReservationID   Owner   Groups  Platform
RESERVATION     r-cde66bb3      462281317311    default
INSTANCE        i-b5f5a2e6      ami-fb8e9292       ip-10-145-209-67.ec2.internal     running marc-aws        0               m3.xlarge       2014-06-16T20:23:48+0000        us-east-1d      aki-919dcaf8                      monitoring-disabled                   ebs                      paravirtual      xen             sg-5fc61437     default
BLOCKDEVICE     /dev/sda1       vol-1633ed53    2014-06-16T20:23:52.000Z        true

Logging in as ec2-user. First of all, let’s see what /proc/cpuinfo says:

[ec2-user@ip-10-7-160-199 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1

Looks like I got some of the slightly faster 2.6GHz CPUs. /proc/cpuinfo shows four processors, each with physical id 0 and core id 0. In other words: one single-core processor with 4 threads. We know that the E5-2670 v2 is actually a 10-core processor, so the information we see at the OS level doesn't quite correspond to the physical hardware.
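Before running throughput tests, it's worth checking what the guest kernel itself believes about core sharing. A small sketch (assuming a modern Linux kernel; paravirtualized guests may not populate these sysfs files, hence the fallback):

```shell
# Print each CPU's thread siblings as reported by the kernel.
# CPUs that list each other as siblings are hyperthreads of one core.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    siblings=$(cat "$cpu/topology/thread_siblings_list" 2>/dev/null || echo "n/a")
    echo "${cpu##*/}: siblings ${siblings}"
done
```

On this instance the listing should match /proc/cpuinfo above; the interesting comparison is against a bare-metal host, where sibling pairs show up explicitly.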

Nevertheless, we’ll proceed with a few simple tests. I’m going to run “gzip”, an integer-compute-intensive compression test, on 2.2GB of zeroes from /dev/zero. By using synthetic input and discarding output, we can avoid the effects of disk I/O. I’m going to combine this test with taskset commands to impose processor affinity on the process.

A simple test

The simplest case: a single thread, on processor 0:

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0 $$
pid 1531's current affinity list: 0-3
pid 1531's new affinity list: 0
[ec2-user@ip-10-7-160-199 ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 17.8837 s, 121 MB/s

With a single processor, we can process 121 MB/s. Let’s try running two gzips at once. Sharing a single processor, we should see half the throughput.

[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 35.8279 s, 60.6 MB/s
2170552320 bytes (2.2 GB) copied, 35.8666 s, 60.5 MB/s

Sharing those cores

Now, let’s make things more interesting: two threads, on adjacent processors. If they are truly dedicated CPU cores, we should get a full 121 MB/s each. If our processors are in fact hyperthreads, we’ll see throughput drop.

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0,1 $$
pid 1531's current affinity list: 0
pid 1531's new affinity list: 0,1
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 27.1704 s, 79.9 MB/s
2170552320 bytes (2.2 GB) copied, 27.1687 s, 79.9 MB/s

We have our answer: throughput has dropped by a third, to 79.9 MB/s, showing that processors 0 and 1 are threads sharing a single core. (But note that hyperthreading is still giving a performance benefit here: 79.9 MB/s on a shared core is higher than the 60.5 MB/s we saw when sharing a single hyperthread.)

Trying the exact same test, but this time, non-adjacent processors 0 and 2:

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0,2 $$
pid 1531's current affinity list: 0,1
pid 1531's new affinity list: 0,2
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 17.8967 s, 121 MB/s
2170552320 bytes (2.2 GB) copied, 17.8982 s, 121 MB/s

All the way back up to full speed, showing these are dedicated cores.

What does this all mean? Let’s go back to Amazon’s vCPU definition:

Each vCPU is a hyperthreaded core

As our tests have shown, a vCPU is most definitely not a core. It’s half of a shared core, or a single hyperthread.

A side effect: inconsistent performance

There’s another issue at play here too: the shared-core behavior is hidden from the operating system. Going back to /proc/cpuinfo:

[ec2-user@ip-10-7-160-199 ~]$ grep 'core id' /proc/cpuinfo
core id         : 0
core id         : 0
core id         : 0
core id         : 0

This means that the OS scheduler has no way of knowing which processors share cores, and cannot schedule tasks around it. Let’s go back to our two-thread test, but instead of restricting it to two specific processors, we’ll let it run on any of them.

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0-3 $$
pid 1531's current affinity list: 0,2
pid 1531's new affinity list: 0-3
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 18.041 s, 120 MB/s
2170552320 bytes (2.2 GB) copied, 18.0451 s, 120 MB/s
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 21.2189 s, 102 MB/s
2170552320 bytes (2.2 GB) copied, 21.2215 s, 102 MB/s
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 26.2199 s, 82.8 MB/s
2170552320 bytes (2.2 GB) copied, 26.22 s, 82.8 MB/s

We see throughput varying between 82.8 MB/s and 120 MB/s for the exact same workload. To get some more performance information, we’ll configure top to take ten 3-second samples with per-processor usage information:

[ec2-user@ip-10-7-160-199 ~]$ cat > ~/.toprc <<-EOF
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
        winflags=25913, sortindx=10, maxtasks=2
        summclr=1, msgsclr=1, headclr=3, taskclr=1
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
[ec2-user@ip-10-7-160-199 ~]$ top -b -n10 -U ec2-user
top - 21:07:50 up 43 min,  2 users,  load average: 0.55, 0.45, 0.36
Tasks:  86 total,   4 running,  82 sleeping,   0 stopped,   0 zombie
Cpu0  : 96.7%us,  3.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  1.4%sy,  0.0%ni, 97.9%id,  0.0%wa,  0.3%hi,  0.0%si,  0.3%st
Cpu2  : 96.0%us,  4.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  1.0%sy,  0.0%ni, 97.9%id,  0.0%wa,  0.7%hi,  0.0%si,  0.3%st

 1766 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:06.08 gzip
 1768 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:06.08 gzip

Here two non-adjacent CPUs are in use. But 3 seconds later, the processes are running on adjacent CPUs:

top - 21:07:53 up 43 min,  2 users,  load average: 0.55, 0.45, 0.36
Tasks:  86 total,   4 running,  82 sleeping,   0 stopped,   0 zombie
Cpu0  : 96.3%us,  3.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 96.0%us,  3.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.3%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.3%hi,  0.0%si,  0.3%st
Cpu3  :  0.3%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st

 1766 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:09.08 gzip
 1768 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:09.08 gzip

Although usage percentages are similar, we’ve seen earlier that throughput drops by a third when cores are shared, and we see varied throughput as the processes are context-switched between processors.

This type of situation arises when compute-intensive workloads are running and there are fewer runnable processes than CPU threads. If only AWS reported correct core IDs to the guest OS, this problem wouldn’t happen: the scheduler would make sure processes did not share cores unless necessary.
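Until that's fixed, the workaround is to impose the affinity yourself. A sketch assuming the sibling pairs measured above (0/1 on one core, 2/3 on the other; the pairing could differ on other instances, so verify it first):

```shell
# Pin one compute-bound worker per physical core: CPUs 0 and 2 sit on
# different cores on this instance, so the two workers never share one.
for cpu in 0 2; do
    taskset -c "$cpu" sh -c 'head -c 1048576 /dev/zero | gzip -c > /dev/null' &
done
wait
```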

Here’s a chart summarizing the results:

[Chart: aws-cpu throughput summary]

Summing up

Over the course of the testing I’ve learned two things:

  • A vCPU in an AWS environment actually represents only half a physical core. So if you’re looking for compute capacity equivalent to, say, an 8-core server, you would need a 4xlarge EC2 instance with 16 vCPUs. Take this into account in your costing models!
  • The mislabeling of the CPU threads as separate single-core processors can result in performance variability as processes are switched between threads. This is something the AWS and/or Xen teams should be able to fix in the kernel.
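The first point reduces to a simple rule of thumb that's easy to fold into capacity planning: one physical core of compute requires two vCPUs. A sketch:

```shell
# Rule of thumb from the tests above: 1 physical core ≈ 2 vCPUs.
cores_needed=8
vcpus_needed=$(( cores_needed * 2 ))
echo "${cores_needed} physical cores => request ${vcpus_needed} vCPUs"
```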

Readers: what has been your experience with CPU performance in AWS? If any of you has access to a physical machine running E5-2670 processors, it would be interesting to see how the simple gzip test runs.

Categories: DBA Blogs

External table enhancements in 11g

Adrian Billington - Tue, 2014-06-24 13:55
Encryption, compression and preprocessing for external tables in Oracle 11g. September 2009 (updated June 2014)

Hacking Oracle 12c COMMON Users

Pete Finnigan - Tue, 2014-06-24 13:20

The main new feature of Oracle 12cR1 has to be the multitenant architecture that allows tenant databases to be added or plugged into a container database. I am interested in the security of this of course and one element that....[Read More]

Posted by Pete On 23/07/13 At 02:52 PM

Categories: Security Blogs

OBIEE Training Site

Abhinav Agarwal - Tue, 2014-06-24 12:33
I was contacted by Seth Williams, who pointed me to this OBIEE training site and asked if I would link to it. There is an online tutorial, as well as a video, on how to create KPIs using OBIEE - How To Use KPIs | OBIEE Online Training Tutorial
I think this is useful, so am posting it to my blog - which, by the way, you would have seen is not being updated regularly. Feel free to browse to the site. Do let Seth and the people at Firebox know what you think of the site and the tutorial.
Disclaimer: I am not endorsing the site or the trainings. But you know that.

OTN DBA/DEV Watercooler - NEW Blog!

OTN TechBlog - Tue, 2014-06-24 11:35

Laura Ramsey, OTN Database Community Manager, has just launched the OTN DBA/DEV Watercooler.  This blog is your official source of news covering Oracle Database technology topics and community activities from throughout the OTN Database and Developer Community. Find tips and in-depth technology information you need to master Oracle Database Administration or Application Development here. This Blog is compiled by @oracledbdev, the Oracle Database Community Manager for OTN, and features insights, tech tips and news from throughout the OTN Database Community.

Find out more about what you might hear around the OTN DBA/DEV Watercooler in Laura's inaugural post. 

Happy Reading!

Ongoing Database Security Services Provide Greater Visibility: Database Activity Monitoring Series pt. 3 [VIDEO]

Chris Foot - Tue, 2014-06-24 08:38

Hi and welcome back to the RDX blog, where we’re deep in a series about our Database Activity Monitoring services and how these services allow our customers to gain full visibility into their database activity.

We’ve previously touched on how we integrated the advanced features of McAfee’s security products to provide our customers with a 24×7 customizable Database Activity Monitoring solution that alerts customers to threats in real time.

In addition to all of that, we also provide ongoing services, such as new threat analyses, vulnerability scans, database and OS patching services and database activity monitoring reports.

Vulnerability assessments help us give you detailed information you can put into action immediately, helping you prioritize and remediate security gaps, and we schedule them on an ongoing basis to prevent future vulnerabilities. You will be notified about any unprivileged users or programs, and they will be quarantined in real time, preventing any further access into the database.

These assessments make demonstrating compliance to auditors much easier, and we’ll touch on this in our next video, the last part of our Database Activity Monitoring series. Thanks for watching, and stay tuned!

The post Ongoing Database Security Services Provide Greater Visibility: Database Activity Monitoring Series pt. 3 [VIDEO] appeared first on Remote DBA Experts.

SQL vs. NoSQL: Which is best?

Chris Foot - Tue, 2014-06-24 01:33

The manner in which information is accessed – as well as how fast it's procured – depends on the day-to-day needs of organizations. Database administration services often help businesses decide whether Not Only Structured Query Language (NoSQL) or conventional Structured Query Language is needed to optimize data-related operations. 

SQL servers, also known as relational databases (RDBMS), have been around for the longest time, with companies such as Oracle and Microsoft developing the structures. The Geek Stuff acknowledged a few key components of the technology:

  • RDBMS are table-based structures, representing data in columns and rows
  • They possess an underlying pattern or protocol to access and read the information
  • Scaled vertically: SQL databases handle increased load by adding hardware power to a single server
  • Good for intricate, extensive queries
  • Vendors typically offer more support for RDBMS, as it is a popular, familiar solution. 

Relatively new to the sector, NoSQL runs off of unstructured query language. MongoDB, the most popular provider of NoSQL databases, explained that they were developed to better handle large sets of different data types. Primary functions of the technology are dictated below:

  • Can consist of four primary types: document, graph stores, key-value (in which every item in the database is stored with a name and its value), or wide column
  • Do not subscribe to schemas or preset rules
  • Scaled by combining the computational power of other machines to reduce load stress – also known as "scaling out" 
  • Outside experts are hard to come by, but database support services can provide users with efficient knowledge. 
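To make the key-value model concrete, here is a minimal sketch using a bash associative array (purely illustrative: a real key-value store adds persistence, replication, and network access on top of this lookup model):

```shell
#!/usr/bin/env bash
# A key-value store is, at its core, a name-to-value lookup table.
declare -A kv
kv["user:1001"]="alice"
kv["user:1002"]="bob"
echo "${kv[user:1001]}"   # prints alice
```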

As they stand in the market 
Visual Studio Magazine referenced a survey of 500 North American software developers by Database-as-a-Service (DBaaS) company Tesora, which discovered that 79 percent of respondents were using a SQL database. The study itself focused on how the two query languages were utilized by those working with private or public cloud environments. 

"Going forward, this gap can be expected to close since NoSQL databases have only been on the market for a few years or less, as opposed to decades for some of the incumbents," acknowledged the report, as quoted by VSM. 

One better than the other? 
For those handling a mix of unstructured, structured and semi-structured data, NoSQL is most likely the way to go. Those managing number-based information should see major benefits from using SQL. 

However, it's important to remember that the processing power of physical servers is increasing at a slower rate than it was ten years ago. Because NoSQL optimizes the use of these machines by pooling computing power, it may be the better choice for those worried about the future. 

The post SQL vs. NoSQL: Which is best? appeared first on Remote DBA Experts.

OOW: Session Catalog

Jean-Philippe Pinte - Mon, 2014-06-23 18:30
The OOW2014 session catalog is online.

Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’

Michael Feldstein - Mon, 2014-06-23 17:15

Richard Levin, the new CEO of Coursera, is getting quite clear about the new goals for the company. At first glance the changes might seem semantic in nature, but I believe the semantics are revealing. Consider this interview published today in the Washington Post [emphasis added in both cases below]:

Richard C. Levin, the new chief executive of Coursera, the most widely used MOOC platform, wants to steer the conversation back to what grabbed public attention in the first place: the wow factor.

Sure, Levin said, the emerging technology will help professors stimulate students on campus who are tired of old-school lectures. The talk of “flipped classrooms” and “blended learning” — weaving MOOCs into classroom experiences — is not mere hype.

“But that is not the big picture,” Levin said in a visit last week to The Washington Post. “The big picture is this magnifies the reach of universities by two or three orders of magnitude.”

Contrast this interview with Daphne Koller’s December article at EdSurge:

Among our priorities in the coming year, we hope to shift the conversation around these two dimensions of the learning experience, redefine what it means to be successful, and lay the groundwork for products, offerings, and features that can help students navigate this new medium of learning to meet their own goals, whether that means completing dozens of courses or simply checking out a new subject. [snip]

Still, we are deeply committed to expanding our impact on populations that have been traditionally underserved by higher education, and are actively working to broaden access for students in less-developed countries through a range of initiatives.

There are valid criticisms of how well Coursera has delivered on its goal of helping students meet their own learning goals, but now it is apparent that the focus of their efforts is shifting away from the learner and towards the institution. Below are a few notes based on these recent interviews.

Changing Direction From Founders’ Vision

This is the second interview in which Levin contradicts the two Coursera founders. In the interview above, Levin makes clear that the point of Coursera is not primarily impact on learners but the reach of great universities. In a New York Times interview from April he made similar points in contrast to Andrew Ng.

In a recent interview, Mr. Levin predicted that the company would be “financially viable” within five years. He began by disagreeing with Andrew Ng, Coursera’s co-founder, who described Coursera as “a technology company.”

Q. Why is the former president of Yale going to a technology company?

A. We may differ in our views. The technology is obviously incredibly important, but what really makes this interesting for me is this capacity to expand the mission of our great universities, both in the United States and abroad, to reach audiences that don’t have access to higher education otherwise.

Levin is signifying a change at Coursera, and he is not just a new CEO to manage the same business. Andrew Ng no longer has an operational role in the company, but he remains as Chairman of the Board (I’m not claiming a correlation here, but just noting the change in roles).

Reach Is Not Impact

@PhilOnEdTech Is "reach" the same as "impact"?

— Russell Poulin (@RussPoulin) June 23, 2014

The answer in my opinion is only ‘yes’ if the object of the phrase is the universities. Impact on learners is not the end goal. In Levin’s world there is a class of universities that are already “great”, and the end goal is to help these universities reach more people. This is about A) having more people understand the value of each university (branding, eyeballs) and B) getting those universities to help more people. I’m sure that B) is altruistic in nature, but Levin does not seem to focus on what that help actually comprises. Instead we get abstract concepts as we see in the Washington Post:

“That’s why I decided to do it,” Levin said. “Make the great universities have an even bigger impact on the world.”

Levin seems enamored of the scale of Coursera (8.2 million registered students, etc), but I can find no concrete statements in his recent interviews that focus on actual learning results or improvements to the learning process (correct me in the comments if I have missed some key interview). This view is very different from the vision Koller was offering in December. In her vision, Koller attempts to improve impact on learners (the end) by using instruction from great university (the means).

Other People’s Money

Given this view of expanding the reach of great universities, the candor about a lack of revenue model is interesting.

“Nobody’s breathing down our necks to start to turn a profit,” he said. Eventually that will change.

Levin said, however, that “a couple” universities are covering their costs through shared revenue. He declined to identify them.

This lack of priority on generating a viable revenue model is consistent with the pre-Levin era, but what if you take it to its logical end with the new focus of the company? What we now have is a consistent story with AllLearn and Open Yale Courses – spending other people’s money to expand the reach of great universities. Have we now reached the point where universities that often have billion-dollar endowments are using venture capital money to fund part of their branding activities? There’s a certain irony in that situation.

It is possible that Levin’s focus will indirectly improve the learning potential of Coursera’s products and services, but it is worth noting a significant change in focus from the largest MOOC provider.

The post Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’ appeared first on e-Literate.

Extended Support for PeopleSoft Interaction Hub Ending--Time to Upgrade Soon!

PeopleSoft Technology Blog - Mon, 2014-06-23 16:58

Extended support for the PeopleSoft Interaction Hub will be ending in October 2014.  Sustaining support will still be available, but if you are an Interaction Hub (aka Portal) customer, you should consider upgrading to the latest release before that time.  The 9.1/Revision 3 release will be available soon after PeopleTools 8.54 is released, so customers considering an upgrade may wish to move to Revision 3 when it becomes available.

See this document for general information on Oracle/PeopleSoft's Lifetime support program.

Parallel Execution Skew - Addressing Skew Using Manual Rewrites

Randolf Geist - Mon, 2014-06-23 13:33
This is just a short note that the next part of the mini series about Parallel Execution skew has been published at

After having shown in the previous instalment of the series that Oracle 12c added a new feature that can deal with Parallel Execution skew (at present in a limited number of scenarios) I now demonstrate in that part how the problem can be addressed using manual query rewrites, in particular the probably not so commonly known technique of redistributing popular values using an additional re-mapping table.

PeopleSoft Interaction Hub Release Value Proposition for Revision 3

PeopleSoft Technology Blog - Mon, 2014-06-23 12:40

We've just published the Release Value Proposition for the PeopleSoft Interaction Hub 9.1/Revision 3. The RVP provides an overview of the new features and enhancements planned for the upcoming release, which is aligned with PeopleTools 8.54. The release value proposition is intended to help you assess the business benefits of upgrading to the latest release and to plan your IT projects and investments.

The highlights of the RVP cover the following subjects in the upcoming release:

  • Branding
  • PeopleSoft Fluid User Interface updates
  • Interaction hub cluster setup improvements
  • Simplified content creation and publication
  • WCAG 2.0 adoption

Look for the availability of this release in the near future.

Contributions by Angela Golla, Infogram

Oracle Infogram - Mon, 2014-06-23 11:35
Contributions by Angela Golla, Infogram Deputy Editor

Oracle Database In-Memory Option

On June 10, Larry Ellison unveiled the new Oracle Database In-Memory option. Oracle Database In-Memory delivers outstanding analytical performance without the need to restrict functionality or accept compromises, complexity and risk. Deploying Oracle Database In-Memory with any existing Oracle Database compatible application is as easy as flipping a switch—no application changes are required. It is fully integrated with Oracle Database’s renowned scale-up, scale-out, storage tiering, availability and security technologies making it the most industrial-strength offering in the industry.

At a special event at Oracle’s headquarters, CEO Larry Ellison described how the ability to combine real-time data analysis with sub-second transactions on existing applications enables organizations to become Real-Time Enterprises that quickly make data-driven decisions, respond instantly to customer’s demands, and continuously optimize key processes.

Oracle Database In-Memory is scheduled for general availability in July and can be used with all hardware platforms on which Oracle Database 12c is supported. Learn more at the Oracle Database In-Memory home page.

“Personalized Learning” Is Redundant

Michael Feldstein - Mon, 2014-06-23 11:13

Dan Meyer has just published a provocative post called “Don’t Personalize Learning,” inspired by an even more provocative post with the same title by Benjamin Riley (as well as being a follow-up to Meyer’s post “Tools for Socialized Instruction not Individualized Instruction“). Part of the confound here is sloppy terminology. Specifically, I think the term “personalized learning” doesn’t really mean anything, so it’s hard to have an intelligent conversation about it.

All learning is personalized by virtue of the fact that it is accomplished by a person for him or herself. This may seem like a pedantic point, but if the whole point of creating the term is to focus on fitting the education to the student rather than the other way around, then it’s important to be clear about agency. What we really want to talk about, I think, is “personalized education” or, more specifically, “personalized instruction.” Here too we need to be thoughtful about what we mean by “personalized.” To me, “personalized” means “to make more personal,” which has to do with the goals and desires of the person in question. If I let you choose what you want to learn and how you want to learn it, those are aspects of personalization. Riley argues that radical personalization, where students make all the choices, isn’t necessarily a good thing, for several reasons. One reason he gives is that learning is cumulative and students are not likely to stumble upon the correct ordering by themselves. He asserts that teaching was invented “largely to solve for that problem.” I agree that one of the main values of a teacher is to help students find good learning paths, but I disagree that students are unlikely to find good paths themselves. Teachers can help students optimize, but the truth is that people learn all sorts of things all the time on their own. Teaching is about the zone of proximal development; it’s about helping students learn (and discover) those things that they are not quite ready to learn on their own but can learn with a little bit of help. That’s not the same thing at all as saying that humans aren’t good at constructing good learning experiences for themselves (which is what you get if you take Riley’s argument to its logical conclusion). Also, I believe in the value of curriculum, but it’s a bit of a straw man to suggest that personalized learning must mean that students decide everything, for themselves and on their own.

And I vehemently disagree with him when he writes,

Second, the problem with the pace argument is that it too contradicts one of the key insights from cognitive science: our minds are not built to think. In fact, our brains are largely oriented to avoid thinking. That’s because thinking is hard. And often not fun, at least at first. As a result, we will naturally gravitate away from activities that we find hard and unpleasant.

Frankly, I think he draws exactly the wrong conclusion from the research he cites. I would say, rather, that we are most inclined to think about things that inspire a sense of fun. We like stories and puzzles. But which stories and which puzzles we like is…well…personal. If you want to get humans to think on a regular basis, then you have to make it personal to them. My own experience as both a teacher and a learner is that if a person is personally engaged then he or she can often learn quite quickly and eagerly. The same cannot often be said of somebody who is personally disengaged. Of course, one can be personally engaged without having a personalized learning experience, if by the latter you mean that the student chooses the work. But the point I made at the top of the post is that “personal” is inherent to the person. The student may not decide what work to do, but she and only she always decides whether or not to engage with that work. When the work is not personalized, a good teacher is always engaging in acts of persuasion, trying to help students find personal reasons to engage.

Meyer is latching onto something different. By “personal” he seems to mean “solitary,” and I interpret him to be responding specifically to adaptive systems, which are often labeled “personalized learning” (as well as “new and improved” and “99.44% pure”). First of all, in and of themselves, adaptive systems are often not personalized in the sense that I described above. They are customized, in that they respond to the individual learner’s knowledge and skill gaps, but they are not personalized. Customized solitary instruction has its place, as I described in my post about what teachers should know about adaptive systems. Customized instruction can also be personalized; for example, students can choose their path down a skill tree on Khan Academy. But I think Dan’s main point is that many of the more interesting and potent learning experiences tend to happen when humans talk with other intelligent humans. We learn from each other, traveling down paths that machines can’t take us yet (and probably won’t be able to for quite a while). It is possible for a learning experience to be simultaneously social and personalized, for example, when students individually work on problems they choose that are interesting to them but then discuss their ideas and solutions with their classmates.

So, to sum up:

  1. Humans are generally pretty good at learning what they want to learn (but can get stuck sometimes).
  2. Help from good teachers can enable humans to learn more effectively than they can on their own in many cases.
  3. Sometimes solitary study can be helpful, particularly for practicing weak skills.
  4. Conversations with other humans often lead to rich, powerful, and personal learning experiences that are difficult or impossible to have on one’s own.
  5. All learning is personal. Some instruction is personalized to a student’s individual interests and choices, and some is customized to a student’s individual skills and knowledge. Some is both and some is neither.
  6. Personalized instruction may or may not include social learning activities.
  7. Customized instruction may or may not include some personalization.

Why do we make this stuff so complicated?

The post “Personalized Learning” Is Redundant appeared first on e-Literate.

The Art of War for Small Business

Surachart Opun - Mon, 2014-06-23 10:51
The Art of War is an ancient Chinese military treatise attributed to Sun Tzu, a high-ranking military general, strategist, and tactician. Many books have been written drawing on Sun Tzu's ancient Art of War, adapting it for military, political, and business use.

The Art of War for Small Business: Defeat the Competition and Dominate the Market with the Masterful Strategies of Sun Tzu is a book that applies the Art of War to small business. It's a perfect book for small business owners and entrepreneurs entrenched in fierce competition for customers, market share, talent, and more. The book is organized into four parts across 224 pages: SEIZE THE ADVANTAGE WITH SUN TZU, UNDERSTANDING: ESSENTIAL SUN TZU, PRINCIPLES FOR THE BATTLEFIELD, and ADVANCED SUN TZU: STRATEGY FOR YOUR SMALL BUSINESS.
It's not many pages to read. It begins with why the art of war should be used by small businesses, then gives lots of examples and ideas for applying the art of war to a small business every day. It helps you choose the right ground for your battles, prepare without falling prey to paralysis, leverage strengths while overcoming limitations, strike competitors' weakest points and seize every opportunity, focus priorities and resources on conquering key challenges, go where the enemy is not, and build and leverage strategic alliances.

After reading, readers should see a picture of the common advantages and disadvantages in small business and understand why small businesses need Sun Tzu. In addition, readers will learn the basics of the art of war and ideas for applying it to a small business, illustrated with real-world small business examples.

Written By: Surachart Opun
Categories: DBA Blogs

Who is Dave Gray and What is a Connected Company?

WebCenter Team - Mon, 2014-06-23 09:28

by Dave Gray, Entrepreneur, Author & Consultant

Who is Dave Gray?

I’m an entrepreneur and designer who has worked on change and innovation initiatives for the last thirty years. I’ve worked with startups and Fortune 100 companies. I’ve worked with companies in finance, energy, defense, technology, media, education, health care, automotive and more. I’ve seen a lot of things in that time, including some amazing successes as well as some catastrophic failures.

In my work with organizations, including growing my own company, I came to see that there are two factors which have the greatest impact on how well an organization can innovate and change. The first is organizational structure, by which I mean how the work is organized, and the way it distributes information and control. The second is organizational culture, by which I mean the habits, behaviors, and informal rules that add up to “the way we do things around here.”

The structure is the organization’s shape and form, while the culture is the life force that animates it. Culture and structure mutually reinforce each other, and the relationship between them is complex.

I have come to believe that both culture and structure can be designed in such a way that an organization can be much more agile and adaptable, so change and innovation come much more easily than they do in a typical organization. Which brings me to the next question: what is a connected company?

What is a Connected Company?
Historically, we have thought of companies as machines, and we have designed them like we design machines. Most companies are conceived and designed this way.

A car is a perfect example of machine design. It’s designed to do one thing and does that thing pretty well. It’s controlled by a driver. Mechanics perform routine maintenance and fix it when it breaks down. Eventually the car wears out, or your needs change, so you sell the car and buy a new one.

If one day you need a truck, or a motorcycle for some reason, the car is not going to adapt to your needs. The car is going to stay a car.

And we tend to design companies the way we design machines: We need the company to perform a certain function, so we design and build it to perform that function. 

The machine view is very successful in a stable environment. If there is a steady, predictable demand for a standard, uniform product, then machines are very efficient and productive. In such conditions, a machine-like company can profit by producing uniform items in large lots.

But over time, things change. The company grows beyond a certain point. New systems are needed. Customers want different products and services. So we redesign and rebuild the machine to serve the new functions. 

This kind of rebuilding goes by many names, including re-organization, reengineering, right-sizing, flattening and so on. The problem with this kind of thinking is that the nature of a machine is to remain static, while the nature of a company is to grow. This conflict causes all kinds of problems because you have to constantly redesign and rebuild the company while at the same time you are trying to operate it. Ironically, the process of improving efficiency is often very inefficient. And the faster things change the more of a problem this becomes.

Companies are not really machines, so much as complex, dynamic, growing systems. After all, companies are really just groups of people who have banded together to achieve some kind of purpose. 

A machine’s purpose is designed into its structure. Once a machine’s purpose has been set, it does what it has been designed to do. But if the environment changes, a machine does not have a way to become aware of the change and adjust to the new situation. It just becomes obsolete.

Organisms, on the other hand, control themselves. An organism’s purpose does not come from an outside designer or controller but from within. An organism strives over time to realize its intentions in the world. As conditions in the environment change, an organism responds by adjusting its behavior and improving its performance over time. In other words, it learns.

Now before we had cars we got around using horses. And a horse is a very adaptable kind of transportation. If you were going into a place where you didn’t know if you were going to have roads, or gasoline, well then a horse might very well be a better choice than a car.

And the business world these days is being continually disrupted by new technologies, new ways of communicating and sharing information. It’s a lot more uncertain and unpredictable, which is why a more adaptable, organic approach gives you more flexibility to adapt as things change.

A connected company is one whose culture and structure are designed to continually learn and adapt to a changing marketplace. It is designed more like an organism and less like a machine. Connected companies distribute information and control differently. They organize work differently.

Instead of a hierarchy like you might see in a typical organization chart, a connected company is organized in what I call a podular way. It operates as a network of small, self-directed teams that are supported by platforms and connected by some kind of common purpose. Amazon and Google are organized in this way, as are many others. 

Teams that are independent and self-directed can learn and adapt more rapidly than their counterparts in divided organizations, because they don’t have to worry about complicated processes and procedures. They don’t have to get permission from a boss before they act. They interface with other teams through a simple network. This makes it possible to move much faster, make faster decisions and learn faster. This kind of organization is more entrepreneurial.

Think of a shopping mall or a commercial district in a city. The city doesn’t tell people which businesses to operate, they create a space and provide infrastructure which gets filled in with entrepreneurs. This is the core of how connected companies operate. They provide a space and a supporting platform that attracts a more entrepreneurial kind of person.

It’s Time to Connect
Adaptation requires learning. Learning requires the freedom to experiment. Today’s business environment is uncertain and variable. It’s impossible to know in advance what kinds of actions will constitute good performance. By giving their employees the freedom to make decisions, connected companies learn and move faster. While others analyze risk, connected companies seize opportunities. While others work in isolation, they link into rich networks of possibility and expand their influence. While others plan, they act.

Connected customers are already demanding more than divided, industrial-age companies can deliver. I’m convinced that as we move toward a more complex, connected, customer-centric world, the businesses that will win will be the connected companies.

Learn more about The Connected Company and Dave in this podcast, and hear more from Dave in our upcoming Digital Business Thought Leadership Series webcast "The Digital Experience: A Connected Company’s Sixth Sense".

Working in Pythian’s Advanced Technology Consulting Group

Pythian Group - Mon, 2014-06-23 08:22

Before I joined Pythian, I had the fortune of having a lot of good jobs across various industries. My favorite jobs were the ones that were fast paced and required me to ramp up my skills on the fly while learning new technology. My least favorite jobs were the ones where my skills were not used and the challenges were few and far between. When I joined Pythian, I hadn't yet realized that I had found my first great job.

In April 2012, I joined Pythian’s Professional Consulting Group (PCG). The members of PCG represented some of the world’s leading data experts, but the name did not adequately represent the skills of the members. Members of PCG were experts in many complementary technologies and many, if not all, were quickly becoming experts in emerging technologies such as Big Data. Because of this, the Professional Consulting Group became the Advanced Technology Consulting Group (ATCG).

As a member of ATCG, my main responsibility is to deliver consulting services to our customers either on site or remotely. Examples of some of the work we might do include: troubleshooting performance problems, migrating databases into Exadata, setting up replication with Oracle GoldenGate, and data integration with numerous sources using Oracle Data Integrator. While all of the items I mentioned deal with Oracle technologies, ATCG also has members who specialize in Microsoft SQL Server and MySQL.

The services we provide to our customers do not stop at traditional database services, ATCG also delivers Big Data services using Hadoop. Examples of some of the Hadoop work I have been involved with include: installing and configuring Cloudera Hadoop, securing Hadoop with Kerberos, and troubleshooting performance. As you can see, ATCG has the opportunity to gain valuable experience across a broad range of technologies.

All of our projects begin with a call with the potential customer. Members of ATCG serve as a technical resource on the call. It is our responsibility to understand the customer’s issue and estimate the effort required to perform the work. Sometimes this can be challenging because the customer might not have a technical resource on their side who can articulately convey the issue. Even if there is a technical resource on the customer’s side, we have to be mindful not to alienate others on the call, so it is vitally important that we are able to convey our message in a way everybody on the call can understand.

You might be thinking “I am not a salesperson!” and “I have never used some of these technologies.” You would not be alone. ATCG members are not salespeople; we simply assist Sales by providing our technical knowledge on a call. Imagine that you are speaking with your boss or a customer about a problem or issue — it really is no different. Dealing with new technology at Pythian is little different from your current job: if you don’t understand something at your current job, you can talk to a few coworkers or research on the net. At Pythian we can reach out to 200+ coworkers and find quite a few who have experience with the technology in question. We can search our internal technical documents, which are quite vast as they detail all of the work we have done, and as a last resort we can search the net. At Pythian, you are never alone and you are never without resources.

There are times when we might not have a project to work on, a.k.a. downtime. During our downtime, we can build our knowledge of technologies that we have an interest in or that we may need a refresher on. We can practice our newfound knowledge by assisting other teams. We can help build the Pythian knowledge base by posting blogs and contributing to our internal documentation.

The work in ATCG is very challenging and you are always learning something new, whether it is a new technology or a new way of thinking about a particular topic. Being bored or pigeonholed is not a problem in ATCG; we are involved in some of the toughest problems and work with the latest technologies. And when we are not, we are in control of our workday, so we can pursue interests in new and emerging database technologies.

Categories: DBA Blogs

Integration Hub – Branding

Kasper Kombrink - Mon, 2014-06-23 04:57
The Integration Hub has come a long way since I first saw it as Enterprise Portal 8.8. The biggest selling point, in my opinion, has always been the branding features. Even though the options never really changed, they did evolve…

Continue reading

Reading: Oracle Magazine July/August 2014

Jean-Philippe Pinte - Sun, 2014-06-22 19:11
The July/August 2014 issue of Oracle Magazine is available.