Feed aggregator

Literally speaking

Gary Myers - Fri, 2014-06-20 21:14
Reading Scott Wesley's blog from a few days ago, I noticed a remark about being unable to concatenate strings when using the ANSI date construct.

The construct date '1900-01-01' is an example of a literal, in the same way that '01-01' is a string literal and 1900 is a numeric literal. We even have some more exotic numeric literals, such as 1e3 and 3d.

Oracle is pretty generous with implicit conversions from strings to numbers and vice versa, so it doesn't object when we assign a numeric literal to a CHAR or VARCHAR2 variable, or a string to a NUMBER variable (as long as the content is appropriate). We are allowed to assign the string literal '1e3' to a number since the content is numeric, albeit in scientific notation.

So there are no problems with executing the following:
declare
  v number := '1e3';
begin
  dbms_output.put_line(v);
end;
/

However, while 3d and 4.5f can be used as numeric literals, Oracle will object to converting the strings '3d' or '4.5f' into a number, because the 'f' and 'd' relate to the data type (BINARY_FLOAT and BINARY_DOUBLE) and not to the content.
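A quick illustration (assuming an Oracle 10g or later session; the first statement succeeds, the second raises an error):

```sql
-- 3d is a valid numeric literal: the d suffix makes it a BINARY_DOUBLE
select 3d from dual;

-- but the string '3d' cannot be converted, because the suffix
-- describes a data type, not numeric content
select to_number('3d') from dual;  -- ORA-01722: invalid number
```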

Similarly, we're not allowed to use string expressions (or VARCHAR2/CHAR variables) within a date literal, or the related timestamp literal. It must be the correct sequence of numbers and separators enclosed in single quotes. Oracle doesn't complain if you use the alternative quoting mechanism, such as date q'[1902-05-01]', but I'd recommend against it as undocumented and superfluous.

Going further, we have interval literals such as interval '15' minute. In these constructs we are not allowed to omit the quotes around the numeric component, and we're not allowed to use scientific notation for the 'number' either (but, again, the alternative quoting mechanism is permitted).
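For example (again assuming an Oracle session):

```sql
-- the quotes around 15 are mandatory in an interval literal
select systimestamp - interval '15' minute from dual;

-- scientific notation is not accepted inside the literal,
-- so this variant fails with an invalid interval error
-- select systimestamp - interval '1e1' minute from dual;
```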

I've built an affection for interval literals, which are well suited to flashback queries.

select versions_operation, a.* 
from test versions between timestamp sysdate - interval '1' minute and sysdate a;

Confusingly the TIMESTAMP keyword in the query above is part of the flashback syntax, and you have to repeat the word if you are using a timestamp literal in a flashback query. 

select versions_operation, a.*
from test versions between timestamp timestamp '2014-06-21 12:50:00'
                   and sysdate a;


Learning From The Earnings Call

Floyd Teter - Fri, 2014-06-20 15:36
So now that we've heard the Oracle FY14 Q4 financial results, what did we learn?  Most noticeably, Oracle continues to be a company in transition...and that transition is beginning to take hold.  Changing a company and an ecosystem as large as Oracle's is like turning a battleship...it takes time.  But we have some pretty clear evidence that the battleship is turning.

Let's look at the numbers for a minute.  Oracle's SaaS and PaaS for FY14 came in at $1.12B (US).  That's up 23% from last year.  Heck, they're growing that part of the business like a weed.  Oracle is already well past Workday in terms of annual revenue (although the comparison is a bit unfair because Workday isn't really in the PaaS business, so let's not count them out by any means), and is approaching about 50% of the latest Salesforce revenue numbers.  Comparatively speaking, Oracle is an up-and-comer in the space.

But the transition is not without some pain.  New software licenses were flat for the quarter and earnings per share missed expectations.  That tells me two things:  1) some of that cloud growth is at the sacrifice of software licensing deals; 2) Oracle has yet to figure out how to make cloud services as profitable as license deals.  We're likely to see the second point work itself out as the pressure for margins ramps up and existing Oracle customers take advantage of the Customer 2 Cloud program.

So what we're seeing is the continuing transition of Oracle into a SaaS and PaaS provider, but with some speed bumps along the way...to be expected in any transition, especially one of this magnitude.  So, you may ask, what's driving the growth?

IMHO, Oracle offers three factors that differentiate their SaaS and PaaS offerings from other market competitors:

1) A database that offers the benefits of multi-tenancy without commingling data.  That's a huge advantage in overcoming security fears of many potential cloud customers.  And, from what I can tell, security fears are the biggest objection for most potential cloud customers.

2) A very well-designed User Experience with Simplified UI.  While there's still work to be done (let's build responsiveness into ADF, the tool used to build the UX, so that we can build once for all platforms), Oracle's UX has become a big differentiator in the marketplace.  And offering up the templates and design patterns so that developers can build their own apps with the same UX is a great approach.

3) Deep pockets.  Check out that Oracle balance sheet.  With all that cash, Oracle can afford to invest in growing their cloud business.  That includes investing in strategic customer accounts.

So while Oracle's Q4 report could have been better, I personally saw what I was looking for: tangible signs of progress in Oracle's transition.  Learned from the earnings call.  Y'all go ahead and sell that stock.  Poo-poo all over the outlook.  Whatever.  The battleship is turning.

DISCLAIMER:  I hold 10 shares of Oracle stock that I acquired while working for Oracle in the 90's.  I keep it just for sentimental value.  I'm admittedly a big fan of Oracle technology.  And while I'm not always a fan of Oracle's business tactics, I think they're a very smart company from a strategic perspective.  So, it's true...I'm biased.  Now you know.

APEX 5.0 - Page Designer; immediate feedback and more

Dimitri Gielis - Fri, 2014-06-20 14:30
In APEX 5.0 you (can) develop in the new Page Designer.

The Page Designer makes you way more productive: fewer clicks and quicker results. You have to get used to it, and you'll probably want a big monitor (time to ask your boss!), but once all that is done you will love it.

The Page Designer is intuitive, and attention was put into the details. When you make a mistake, APEX gives you immediate feedback. Here's a screenshot:


The region containing the error is highlighted.
You get a red notification message at the top right with the error text, and inside the property panel the field you need to change is highlighted. Once you click on the field, it gives another text notification, e.g. that it is required.

There's also the Messages tab which gives you an explanation of what is wrong. Clicking on the link will bring you right where you need to go.

But just look at the Page Designer for a while; notice the small red triangles at the top left of some fields; they identify required fields. The "Show Common" and "Show All" tabs are great too.

So many things, small, large, ... but so useful.

Here's another one - Developer Comments for the page. If there are comments you see a number in the comment icon. When clicking on the icon you can add more comments. I believe it would also be useful to see the existing comments, hopefully that will be in the final release.


This post is based on Oracle APEX 5.0 EA2, but it looks like there's more coming. Linked to the previous feature, I already see a tighter integration with Team Development too.

So many things to explore in the APEX 5.0 Page Designer... definitely worth your time.
Categories: Development

What is Database Activity Monitoring?: Database Activity Monitoring Series Kick-off [VIDEO]

Chris Foot - Fri, 2014-06-20 13:56

Today we're kicking off a series on Database Activity Monitoring. As your database administrators, safeguarding customer data is our highest priority. That’s why we offer 24×7 Database Activity Monitoring services, which allow organizations to gain full visibility into all database activity.

At RDX, we’ve partnered with McAfee, the world’s largest dedicated security company, to bring our customers the highest level of database activity monitoring. RDX has integrated the features and functionality provided by McAfee’s database security products into its support environment to give our clients visibility into all database activity, including local privileged access and sophisticated attacks from within the database itself.

Not only that, we help you save money on a security monitoring support architecture, because our Proactive Monitoring and Response Center provides 24×7, real-time security alert monitoring and support by around-the-clock staff members who are onsite, onshore, and 100 percent dedicated to protecting your organization's core assets.

This constant monitoring also helps us receive alerts of attacks in real time and terminate sessions that violate predetermined security policies.

We custom tailor a database activity monitoring solution to fit each customer’s unique needs – which we'll touch on in our next video!
 

The post What is Database Activity Monitoring?: Database Activity Monitoring Series Kick-off [VIDEO] appeared first on Remote DBA Experts.


Customizing a Database Activity Monitoring Solution: Database Activity Monitoring Series pt. 2 [VIDEO]

Chris Foot - Fri, 2014-06-20 13:32

Real-time monitoring means constant protection from potential threats, and at RDX we customize database activity monitoring to fit our customers’ unique security requirements.

First, we hold fact-finding meetings during the customer integration process to learn our customers’ database security requirements and internal practices. Then we educate our customers on the installation and configuration of the security monitoring architecture, which utilizes an RDX-supplied security appliance.

Next, we work with our customers to determine which event notifications and escalation procedures are best for their database environments. They can set notification rules about the time of day a database is accessed, certain users who access it, and the computers and programs used to access it, among hundreds of other customizable parameters.

After implementation, our team of dedicated professionals provides 24×7, 100% onshore monitoring of your database environments and will alert you to any activities that violate your predetermined security parameters.

We also provide our customers with ongoing database security services. Find out more about these in our next video! 

The post Customizing a Database Activity Monitoring Solution: Database Activity Monitoring Series pt. 2 [VIDEO] appeared first on Remote DBA Experts.

I Guess Wearables Are a Thing

Oracle AppsLab - Fri, 2014-06-20 11:17

For what seems like ages, the noise around wearable technology has been building, but until recently, I’ve been skeptical about widespread adoption.

Not anymore, wearables are a thing, even without an Apple device to lead the way.

Last week, Noel (@noelportugal) and I attended the annual conference of the Oracle HCM Users Group (@ohugupdates); the Saturday before the conference, we showed off some of our wearable demos to a small group of customers in a seminar hosted by Oracle Applications User Experience.

As usual, we saturated the Bluetooth spectrum with our various wearables.

This doesn’t even include Noel’s Glass and Pebble.

The questions and observations of the seminar attendees showed a high level of familiarity with wearables of all types, not just fitness bands, but AR glasses and other, erm, wearable gadgets. A quick survey showed that several of them had their own wearables, too.

Later in the week, chatting up two other customers, I realized that one use case I’d thought was bogus is actually real, the employee benefits plus fitness band story.

In short, employers give out fitness bands to employees to promote healthy behaviors and sometimes competition; the value to the organization comes from an assumption that the overall benefit cost goes down for a healthier employee population. Oh, and healthy people are presumably happier, so there’s that too.

At a dinner, I sat between two people, who work for two different employers, in very different verticals; they both were wearing company-provided fitness trackers, one a Garmin device, the other a FitBit. And they both said the devices motivated them.

So, not a made-up use case at all.

My final bit of anecdotal evidence from the week came during Jeremy’s (@jrwashley) session. The room was pretty packed, so I decided to do some Bluetooth wardriving using the very useful Bluetooth 4.0 Scanner app, which has proven to be much more than a tool for finding my lost Misfit Shine.

From a corner of the room, I figured my scan covered about a third of the room.

That’s at least six wearables, five that weren’t mine. I can’t tell what some of the devices are, e.g. One, and devices like Google Glass and the Pebble watch won’t be detected by this method. We had about 40 or so people in the room, so even without scanning the entire room, that’s a lot of people rocking wearables.

If you’re not impressed by my observations, maybe some fuzzy app-related data will sway you. From a TechCrunch post:

A new report from Flurry Analytics shows that health and fitness apps are growing at a faster rate than the overall app market so far in 2014. The analytics firm looked at data from more than 6,800 apps in the category on the iPhone and iPad and found that usage (measured in sessions) is up 62% in the last six months compared to 33% growth for the entire market, an 87% faster pace.

This data comes just as Apple and Google aim to boost the ecosystem for fitness apps and wearables with HealthKit and Google Fit, both of which aim to make it easy for wearable device manufacturers to share their data and app developers to use that data to make even better apps.

Of course, if/when Apple and Google make their plays, wearables will only get more prevalent.

So, your thoughts about wearables, your own and other people's, corporate wellness initiatives, and your own observations belong in the comments.

Ambari Blueprints and One-Touch Hadoop Clusters

Pythian Group - Fri, 2014-06-20 11:11

For those who aren’t familiar, Apache Ambari is the best open source solution for managing your Hadoop cluster: it’s capable of adding nodes, assigning roles, managing configuration and monitoring cluster health. Ambari is HortonWorks’ version of Cloudera Manager and MapR’s Warden, and it has been steadily improving with every release. As of version 1.5.1, Ambari added support for a declarative configuration (called a Blueprint) which makes it easy to automatically create clusters with many ecosystem components in the cloud. I’ll give an example of how to use Ambari Blueprints, and compare them with existing one-touch deployment methods for other distributions.

Why would I want that?

I’ve been working on improving the methodology used by the Berkeley Big Data Benchmark. Right now spinning up the clusters is a relatively manual process, where the user has to step through the web interfaces of Cloudera Manager and Ambari, copy-paste certificates and IPs, and assign roles to nodes. The benchmark runs on EC2 instances, so I’ve been focused on automatic ways to create clusters on Amazon:

  • Apache Whirr can create a Hadoop cluster (or a number of other Big Data technologies), including CDH5, MapR and HDP. Documentation is sparse, and there doesn’t appear to be support for installing ecosystem projects like Hive automatically.
  • Amazon EMR supports installing Hive and Impala natively, and other projects like Shark via bootstrap actions. These tend to be older versions which aren’t suitable for my purposes.
  • MapR’s distribution is also available on EMR, but I haven’t used that since the different filesystem (MapRFS vs. HDFS) would impact results.

Hive-on-Tez is only supported on HDP at the moment, so it’s crucial that I have a one-touch command to create not only CDH5 clusters but also HDP clusters. Ambari Blueprints provide a crucial piece of the solution.

The Blueprint

Blueprints themselves are just JSON documents you send to the Ambari REST API. Every Ambari Blueprint has two main parts: a list of “host groups”, and configuration.

Host Groups

Host groups are a set of machines with the same agents (“components” in Ambari terms) installed – a typical cluster might have host groups for the NameNode, SecondaryNameNode, ResourceManager, DataNodes and client nodes for submitting jobs. The small clusters I’m creating have a “master” host group with the NameNode, ResourceManager, and HiveServer components on a single server, and then a collection of “slaves” running the NodeManager and DataNode components. Besides a list of software components to install, every host group has a cardinality. I originally thought this was a pain, assuming the cardinality was exact – that a blueprint with 5 slave nodes must have exactly 5 slaves – but thanks to John from HortonWorks for a correction: cardinality is an optional hint which isn’t validated by Ambari. This wasn’t clear from the docs.

To provide a concrete example, the sample host groups I’m using look like this:

"host_groups" : [
  {
    "name" : "master",
    "components" : [
      { "name" : "NAMENODE" },
      { "name" : "SECONDARY_NAMENODE" },
      { "name" : "RESOURCEMANAGER" },
      { "name" : "HISTORYSERVER" },
      { "name" : "ZOOKEEPER_SERVER" },
      { "name" : "HIVE_METASTORE" },
      { "name" : "HIVE_SERVER" },
      { "name" : "MYSQL_SERVER" }
    ],
    "cardinality" : "1"
  },
  {
    "name" : "slaves",
    "components" : [
      { "name" : "DATANODE" },
      { "name" : "HDFS_CLIENT" },
      { "name" : "NODEMANAGER" },
      { "name" : "YARN_CLIENT" },
      { "name" : "MAPREDUCE2_CLIENT" },
      { "name" : "ZOOKEEPER_CLIENT" },
      { "name" : "TEZ_CLIENT" },
      { "name" : "HIVE_CLIENT" }
    ],
    "cardinality" : "5"
  }
]

This host_groups section describes a single node with all of the “master” components installed, and five slaves with just the DataNode, NodeManager and clients installed. Note that some components have dependencies: it’s possible to build an invalid blueprint which contains a HIVE_METASTORE but not a MYSQL_SERVER. The REST API provides appropriate error messages when such a blueprint is submitted.
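Because a blueprint is plain JSON, it’s easy to generate one programmatically instead of hand-editing it for each cluster size. Here’s a hedged Python sketch (the helper name and component lists are mine, mirroring the example above; nothing here is an Ambari API) that builds the host_groups array for an arbitrary slave count:

```python
import json

# Component lists taken from the example blueprint above
MASTER_COMPONENTS = ["NAMENODE", "SECONDARY_NAMENODE", "RESOURCEMANAGER",
                     "HISTORYSERVER", "ZOOKEEPER_SERVER", "HIVE_METASTORE",
                     "HIVE_SERVER", "MYSQL_SERVER"]
SLAVE_COMPONENTS = ["DATANODE", "HDFS_CLIENT", "NODEMANAGER", "YARN_CLIENT",
                    "MAPREDUCE2_CLIENT", "ZOOKEEPER_CLIENT", "TEZ_CLIENT",
                    "HIVE_CLIENT"]

def host_groups(slave_count):
    """Build the host_groups array for one master and N slaves."""
    return [
        {"name": "master",
         "components": [{"name": c} for c in MASTER_COMPONENTS],
         "cardinality": "1"},
        {"name": "slaves",
         "components": [{"name": c} for c in SLAVE_COMPONENTS],
         "cardinality": str(slave_count)},
    ]

# Emit the JSON fragment for a five-slave cluster
print(json.dumps(host_groups(5), indent=2))
```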

Configuration

Configuration allows you to override the defaults for any services you’re installing, and it comes in two varieties: global and service-specific. Global parameters are required by some services: to my knowledge, Nagios and Hive require global parameters to be specified – these parameters apply to multiple roles within the cluster, and the API will tell you if any are missing. Most cluster configuration (your typical core-site.xml, hive-site.xml, etc. parameters) can be overridden in the blueprint by specifying a configuration with the leading part of the file name, and then providing a map of the keys to overwrite. The configuration below provides a global variable that Hive requires, and it also overrides some of the default parameters in hive-site.xml. These changes will be propagated to the cluster as if you changed them in the Ambari UI.

"configurations": [
  {
    "global": {
      "hive_metastore_user_passwd": "p"
    }
  },
  {
    "hive-site": {
      "javax.jdo.option.ConnectionPassword": "p",
      "hive.security.authenticator.manager": "org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator",
      "hive.execution.engine": "tez",
      "hive.exec.failure.hooks": "",
      "hive.exec.pre.hooks": "",
      "hive.exec.post.hooks": ""
    }
  }
]

This config will override some parameters in hive-site.xml, as well as setting the metastore password to ‘p’. Note that you can specify more configuration files to override (core-site.xml, hdfs-site.xml, etc.), but each file must be its own object in the configurations array, similar to how global and hive-site are handled above.

Once you’ve specified the host groups and any configuration overrides, the Blueprint also needs a stack – the versions of software to install. Right now Ambari only supports HDP – see this table for the stack versions supported in each Ambari release. As a weird constraint, the blueprint name is inside the blueprint itself, along with the stack information. This name must be the same as the name you provide to the REST endpoint, for some reason. To upload a new blueprint to an Ambari server you can use:

$ curl -X POST -H 'X-Requested-By: Pythian' <ambari-host>/api/v1/blueprints/<blueprint name> -d @<blueprint file>

The X-Requested-By header is required, and as noted the blueprint name in the URL must match the name inside the blueprint file.

You can see the entire blueprint file from this example here, feel free to use it as a baseline for your cluster.

Creating the Cluster

Once you’ve written a blueprint with the services and configuration you want, you need to:

  • Create EC2 instances with the correct security groups
  • Install ambari-master on one, and ambari-agent on the others
  • Configure the agents to report to the master
  • Write a file mapping hosts to host groups
  • Push both files (the blueprint and the mapping) to the REST API

Fortunately, we have a Python script that can do that for you! This script will create a benchmarking cluster with a specific number of data nodes, an Ambari master and a separate Hadoop master. It can easily be modified to create multiple classes of machines, if you want to have more host groups than “master” and “slave”. The core of the script (the EC2 interaction and Ambari RPM installation) is based on work by Ahir Reddy from Databricks, with the Ambari Blueprints support added by yours truly.

If you’re curious about the host mapping file: it has the blueprint name, and an array of host names for every host_group. Corresponding to the example above, the cluster definition would be:

{
  "blueprint": "hadoop-benchmark",
  "host_groups": [
    {
      "name": "master",
      "hosts": [{"fqdn": "host-1"}]
    },
    {
      "name": "slaves",
      "hosts": [
        {"fqdn": "host-2"},
        {"fqdn": "host-3"}
      ]
    }
  ]
}

You could replace “host-n” with the real domain names for your Amazon instances (use the internal ones!), and create a new cluster over those machines using:

$ curl -X POST -H 'X-Requested-By: Pythian' <ambari-host>/api/v1/clusters/<cluster name> -d @<mapping file>
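The mapping file is just as easy to generate as the blueprint. A hedged Python sketch (the function name is mine, not part of any Ambari tooling) that produces the cluster definition above for any list of slave FQDNs:

```python
import json

def cluster_definition(blueprint_name, master_fqdn, slave_fqdns):
    """Build the host-mapping document POSTed to /api/v1/clusters/<name>."""
    return {
        "blueprint": blueprint_name,
        "host_groups": [
            {"name": "master", "hosts": [{"fqdn": master_fqdn}]},
            {"name": "slaves", "hosts": [{"fqdn": f} for f in slave_fqdns]},
        ],
    }

# Reproduce the two-slave example from the post
print(json.dumps(cluster_definition("hadoop-benchmark", "host-1",
                                    ["host-2", "host-3"]), indent=2))
```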

Conclusion

Ambari Blueprints have some rough edges right now, but they provide a convenient way to deploy all of the services supported by the HDP stack. Watch this space for more posts about my effort to create a repeatable, one-touch, cross-distribution Hadoop SQL benchmark on EC2.

Categories: DBA Blogs

Resolving Problems with the Embedded WebLogic in JDeveloper on Mac

Shay Shmeltzer - Fri, 2014-06-20 10:03

Just a quick entry about something that I ran into in the past with JDeveloper 11.1.2.4, and that some of you who are using Mac might run into.

When you try and run your web application and the embedded WebLogic tries to start you might run into an error like:

Unrecognized option: -jrockit
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit. 

This is most likely due to the fact that JDeveloper is trying to use the wrong JVM to run your WebLogic.

To solve this - go into the system11.1.2.4.39.64.36.1/DefaultDomain/bin directory and locate the setDefaultDomain.sh file.

Edit this file and add the following lines:

JAVA_VENDOR="Sun"

export JAVA_VENDOR 

By doing this you'll instruct WebLogic to start with a regular JVM and not the JRockit variant, which isn't on your Mac.
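If you'd rather script the change, here's a shell sketch of the same edit. The domain path below is a placeholder (it creates a stand-in file under /tmp so the sketch runs anywhere); substitute your real DefaultDomain/bin directory:

```shell
# Placeholder path -- point DOMAIN_BIN at your own
# system11.1.2.4.39.64.36.1/DefaultDomain/bin directory.
DOMAIN_BIN="${DOMAIN_BIN:-/tmp/DefaultDomain/bin}"
mkdir -p "$DOMAIN_BIN"
touch "$DOMAIN_BIN/setDefaultDomain.sh"   # stands in for the real script here

# Append the JAVA_VENDOR override described above
cat >> "$DOMAIN_BIN/setDefaultDomain.sh" <<'EOF'
JAVA_VENDOR="Sun"
export JAVA_VENDOR
EOF

# Show the lines we just added
grep 'JAVA_VENDOR' "$DOMAIN_BIN/setDefaultDomain.sh"
```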

Categories: Development

Log Buffer #376, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-06-20 08:52

This Log Buffer Edition collects and presents various blog posts from the Oracle, SQL Server and MySQL arenas.

Oracle:

Oracle SOA Suite Achieves Significant Customer Adoption and Industry Recognition

The Perks of Integrated Business Planning

Why are we so excited about Oracle Database XE R2?

Skipping ACS ramp-up using a SQL Patch

Moving datafiles has always been a pain.

SQL Server:

Using DBCC DROPCLEANBUFFERS When Testing Performance

Data Mining Introduction Part 9: Microsoft Linear Regression

PowerShell One-Liners: Accessing, Handling and Writing Data

Stairway to SQL Server Security: Level 2, Authentication

Matthew Flatt was trying to measure the performance of a tool interacting with Microsoft Azure

MySQL:

Using UDFs for geo-distance search in MySQL

Amidst various blog postings on Docker, a security issue announced yesterday that detailed an exploit of Docker that makes it possible to do container breakout.

Discover the MySQL Central @ OpenWorld Content Catalog

Shinguz: Replication channel fail-over with Galera Cluster for MySQL

SQL queries preview before sent to server in MySQL for Excel

Categories: DBA Blogs

ADF BC Reserved Mode for DB Connection Critical Applications

Andrejus Baranovski - Fri, 2014-06-20 06:31
With this post I would like to explain how you can control the ADF BC mode and switch from the default managed mode to reserved mode and back. With default managed mode, there is no guarantee in ADF BC that the DB connection stays assigned to the user session. While it is highly recommended to design your ADF application to be compatible with default managed mode, there are exceptional scenarios where we want to guarantee that the DB connection stays the same between requests. Reserved mode allows us to guarantee the DB connection, and there is a way to switch between modes at runtime. You can read more in the ADF developer guide - 40.4 Setting the Application Module Release Level at Runtime.

Sample application - ReservedModeADFApp.zip - is developed to demonstrate how you can switch between managed and reserved modes. You can change any field and click Prepare Data; this will post it to the DB. By default, the data will be lost during the next request. However, if you click Enable Reserved Mode, ADF BC will switch to reserved mode and the DB connection will stay assigned until you switch back to managed mode again:


The Prepare Data button calls the ADF BC post operation, to post data to the DB. There are two key methods here - enableReservedMode and disableReservedMode. As you can see, it is pretty straightforward to switch between modes; you only need to call the setReleaseLevel method available as part of the ADF BC API:


Application Module configuration is set with all default settings, also DB pooling is enabled:


We should test how it works to switch between two different modes - managed and reserved. Let's change data in any field, for example Hire Date:


Click Prepare Data button to post changes for Hire Date to DB:


Click Save button to commit transaction:


At last, click Cancel button to check if previously posted changes were committed to DB:


You can see that the changes in Hire Date are reverted, as the change was not saved after it was posted. The Application Module is set with DB pooling, meaning during each request it will return the DB connection to the pool and get another one on the next request. This means all changes posted to the DB are lost:


This is correct behaviour. But in some exceptional cases, we would like to keep the DB connection between requests. We can force this by switching the Application Module into reserved mode. By default, when DB pooling is enabled, you can see there are zero DB connections in use (connections are returned quickly to the pool after request completion):


We repeat our test, but a little differently now. Before posting changes to the DB, click the Enable Reserved Mode button - this will switch the Application Module to reserved mode and guarantee the DB connection across requests:


You can see from the DB connection monitoring, one connection gets reserved now and stays assigned:


Click button Prepare Data now, to post changes to DB when reserved mode is on:


Click Save button to commit transaction:


Click Cancel button to see if recent changes will be reverted:


The changes are not lost this time, meaning the posted data was successfully committed in the next request, when the Save button was pressed:


Changes were posted and committed to DB, we can switch back to default managed mode. Click Disable Reserved Mode button:


You should see from DB connection monitoring graph, previously reserved DB connection should be returned back to the pool:

Fun: Follow me!

Jean-Philippe Pinte - Fri, 2014-06-20 06:12
Follow me on http://www.flightradar24.com (Air France flight AF84)

Rittman Mead at ODTUG KScope’14, Seattle

Rittman Mead Consulting - Fri, 2014-06-20 03:42

NewImage

Next week is ODTUG KScope’14 in Seattle, USA, and Rittman Mead will be delivering a number of presentations over the week. Coming over from Europe we have Jérôme Françoisse and myself, and we’ll be joined by Michael Rainey, Daniel Adams, and Charles Elliott from Rittman Mead in the US. We’re also very pleased to be co-presenting with Nicholas Hurt from IFPI, who some of you might know from the Brighton BI Forum this year; he’s talking with Michael Rainey about their Oracle data warehouse upgrade story.

Here’s details of the Rittman Mead sessions and speakers during the week:

  • Daniel Adams : “Hands-On Training: OBIEE Data Visualization: The “How” and the “Why”?” – Monday June 23rd 2014 1.15pm – 3.30pm, Room 2A/2B
  • Jérôme Françoisse : “Oracle Data Integrator 12c New features” – Monday June 23rd 1.15pm – 2.15pm, Room 616/615
  • Mark Rittman : “Deploying OBIEE in the Cloud: Options and Deployment Scenarios” – Monday June 23rd 3.45pm – 4.45pm, Room 615/616
  • Mark Rittman : “OBIEE, Hadoop and Big Data Analysis” – Tuesday June 24th 11.15am – 12.15pm, Room 602/603/604
  • Michael Rainey and Nicholas Hurt (IFPI) : “Real-Time Data Warehouse Upgrade: Success Stories” – Tuesday June 24th 2014 2pm – 3pm, Room 614
  • Charles Elliott : “OBIEE and Essbase – The Circle of Life” – Wednesday June 25th 11.15am – 12.15pm
  • Mark Rittman : “TimesTen as your OBIEE Analytic Sandbox” – Wednesday June 25th 3.15pm – 4.15pm, Room 615/616

We’ll also be around the event and Seattle all week, so if you’ve got any questions you’d like answered, or would like to talk to us about how we could help you with an Oracle BI, data integration, data warehousing or big data implementation, stop one of us for a chat or drop me a line at mark.rittman@rittmanmead.com.

Categories: BI & Warehousing

PeopleSoft 9.3 – A clarification

Duncan Davies - Fri, 2014-06-20 02:49

After the release and subsequent removal of the ‘there is no PeopleSoft 9.3’ post on the My Oracle Support site and on Twitter, I’ve been in contact with Oracle to find out the truth behind these rumours. Let me share with you what I have learned directly from Oracle…

Oracle believes PeopleSoft customers want new capabilities delivered at a high frequency that they can adopt in a non-disruptive, low cost way. They understand that the model of big, disruptive, high cost upgrades every several years doesn’t work any more.

The new PeopleSoft delivery model enables Oracle to give PeopleSoft customers exactly that. PeopleSoft Update Manager (PUM), delivered with PeopleSoft 9.2, is the technology that enables this new delivery model.

What does this mean for a PeopleSoft 9.3 release? PeopleSoft customers would like to avoid major upgrades. Oracle’s focus now and for the foreseeable future is to deliver new innovative capabilities onto PeopleSoft 9.2 and (if possible) avoid delivering a PeopleSoft 9.3 release and thus pushing customers towards a major upgrade. The level of Oracle’s commitment to PeopleSoft and the investment in new features, functions, and capabilities remains the same regardless of the delivery mechanism (i.e. delivery onto PeopleSoft 9.2 or in a new PeopleSoft 9.3 release).

Learn more for yourself; check out the PeopleSoft Talk featuring Paco Aubrejuan discussing the new PeopleSoft delivery model and its impact on a PeopleSoft 9.3 release: http://youtu.be/Hm4UWtooG0I


Application Management Pack for Utilities Self Running Demonstration

Anthony Shorten - Thu, 2014-06-19 20:32

A self-running demonstration of the Application Management Pack for Oracle Utilities is now available from My Oracle Support at Doc Id: 1474435.1. The demonstration, in SWF (Flash) format, covers the features of the pack available for Oracle Enterprise Manager and is annotated for ease of use.

The demonstration covers the following topics:

  • Discovery of the Oracle Utilities Targets
  • Registration of Oracle Utilities Targets
  • Starting and Stopping Oracle Utilities Targets
  • Patching Oracle Utilities Targets
  • Migrating patches across Oracle Utilities Targets
  • Cloning Environments including basic and advanced cloning
  • Miscellaneous functions

The demonstration requires a browser with Adobe Flash installed. The demonstration can be streamed from My Oracle Support or downloaded for offline replay.


Move That Datafile!

alt.oracle - Thu, 2014-06-19 14:56
Moving datafiles has always been a pain.  There are several steps, it’s fairly easy to make a mistake and it requires the datafile to be offline.  There are also different steps depending on whether the database is in ARCHIVELOG mode or not.  In ARCHIVELOG mode, the steps are…
1) Take the tablespace containing the datafile offline
2) Copy/rename the datafile at the OS layer
3) Use ALTER TABLESPACE…RENAME DATAFILE to rename the datafile so that the controlfile will be aware of it
4) Backup the database for recovery purposes (recommended)
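In SQL*Plus, those steps look roughly like the sketch below (the tablespace and file paths are illustrative, borrowed from the example later in this post):

```sql
-- Traditional (pre-12c) datafile move in ARCHIVELOG mode.
-- Tablespace name and file paths are illustrative.
ALTER TABLESPACE test OFFLINE;

-- Copy/rename the file at the OS layer, e.g. from a host shell:
-- $ mv /oracle/base/oradata/TEST1/datafile/test01.dbf \
--      /oracle/base/oradata/TEST1/datafile/newtest01.dbf

-- Tell the controlfile about the new location.
ALTER TABLESPACE test
  RENAME DATAFILE '/oracle/base/oradata/TEST1/datafile/test01.dbf'
  TO '/oracle/base/oradata/TEST1/datafile/newtest01.dbf';

ALTER TABLESPACE test ONLINE;

-- Then take a fresh backup so the rename is protected for recovery.
```

Note that the tablespace (and any objects in it) is unavailable for the duration of the copy, which is exactly the pain the 12c command removes.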
If the database is in NOARCHIVELOG mode, you have to shut down the DB, put it in the MOUNT state, etc, etc.  That’s certainly not that hard to do, but you get the feeling that there should be a better way.  Now in Oracle 12c, there is – using the ALTER DATABASE MOVE DATAFILE command.  With this command, you can move a datafile, while it’s online, in one simple step.  Let’s set this up.
SQL> create tablespace test datafile '/oracle/base/oradata/TEST1/datafile/test01.dbf' size 10m;
Tablespace created.
SQL> create table altdotoracle.tab1 (col1 number) tablespace test;
Table created.
SQL> insert into altdotoracle.tab1 values (1);
1 row created.
SQL> commit;
Commit complete.
Let’s go the extra mile and lock the table in that datafile in another session.
SQL> lock table altdotoracle.tab1 in exclusive mode;
Table(s) Locked.
Now let’s use the command.
SQL> alter database move datafile '/oracle/base/oradata/TEST1/datafile/test01.dbf'
  2  to '/oracle/base/oradata/TEST1/datafile/newtest01.dbf';
Database altered.
That’s all there is to it.  Datafile moved/renamed in one step while a table it contained was locked.
SQL> select file_name from dba_data_files where file_name like '%newtest%';
FILE_NAME
--------------------------------------------------------------------------------
/oracle/base/oradata/TEST1/datafile/newtest01.dbf
Categories: DBA Blogs

Oracle Priority Service Infogram for 19-JUN-2014

Oracle Infogram - Thu, 2014-06-19 14:53

APEX
(YABAOAE) Yet Another Blog About Oracle Application Express lets us know that Oracle Application Express 5.0 Early Adopter 2 is now available!
SQL Developer
From that JEFF SMITH: Managing Scripts in Oracle SQL Developer.
OEM
AWR Warehouse in EM12c Rel. 4 from DBA Kevlar.
GoldenGate
Oracle GoldenGate Data Transformation from VitalSoftTech.
Fusion
From Practical experience on Oracle Fusion Middleware: How to Create tree type CVL in Content server(UCM).
BI Analytics
Application Composer: Exposing Your Customizations in BI Analytics and Reporting, from Fusion Applications Developer Relations.
OUM
Oracle Unified Method (OUM) 6.2, Oracle’s Full Lifecycle Method for Deploying Oracle-Based Business Solutions, from SOA & BPM Partner Community Blog.
Big Data
From InformationWeek: 10 Powerful Facts About Big Data.
Big Data Appliance
End-to-End ODI12c ETL on Oracle Big Data Appliance Pt.3 : Enhance with Oracle Reference Data via Sqoop, and CKMs, from RittmanMead.

Security
Reducing the human cost of your security metrics, from TRANSLATING SECURITY VALUE.
Have you looked at Risks Digest lately? Always worth a read. Interesting and sometimes very useful analysis of a wide variety of security issues as well as anything in life that involves risk…which is pretty much everything in life.
OpenStack
Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases, from Ronen Korman's Blog.
EBS
From the Oracle E-Business Suite Support Blog:
Tendering in Oracle Transportation Management
What R12 Diagnostics are Available for Inventory?
Webcast: New Features and Enhancements in Release 12.2 Enterprise Asset Management
New Patch Released for iSupplier Portal/Supplier Lifecycle Management (SLM)
Webcast: OPM/LCM: Key setups & Data Diagnostics
Wave Planning with Oracle Warehouse Management
Webcast: Oracle Payments Funds Disbursement Analyzer
Webcast: Holds Unlocked
Payroll Customers Must Apply Mandatory Patches to Maintain Your Supportability
...and Finally
For those of you following our America's Cup team, this from the Taiwan News: Cup rules allow Oracle to sail with challengers.
Continuing dreams of going into space. Okay, granted, wherever you go, there you are, but still, visiting other stars is our long-term goal: Transcendence Going Interstellar: How the Singularity Might Revolutionize Interstellar Travel, from Centauri Dreams.

Android Update: 4.4.3

Dietrich Schroff - Thu, 2014-06-19 11:36
After nearly everyone had upgraded to 4.4.3, my device came up with the icon for upgrading Android to its next version.

For a complete history of all updates visit this posting.

Mobile applications can be a boon for businesses

Chris Foot - Thu, 2014-06-19 11:32

With thousands of unique businesses active today, each providing specific services or products to consumers, creating mobile applications tailored to those offerings is a feasible step for many of them.

The task is of course easier said than done. Platform-as-a-Service offers organizations the environment in which to create smartphone and tablet tools. However, monitoring such a system will likely require the expertise of database administration services that specialize in cloud deployments.

Why mobile matters
Although having a mobile application won't wholly determine whether a company is successful, it certainly wouldn't hurt to have one. Harvard Business Review referenced a 2012 survey of 1,051 U.S. smartphone users aged 13 to 54 conducted by AOL and advertising agency BBDO. The study discovered that:

  • Nearly half (48 percent) of all consumers spent an average of 864 minutes using their smartphones to seek entertainment.
  • Just under 20 percent spent time socializing with other people using the devices.
  • Approximately 12 percent leveraged their devices to find a product or service.

Because smartphone purchase rates have been increasing steadily each year, the manner in which the units are used is becoming more diverse. It can only be expected that people will continue to shop more on their phones, or at least search for items.

Constructing ubiquitous brands
Developing and launching unique mobile applications can help organizations boost their prevalence in the market. According to Natasha Clark, a contributor to BusinessTechnology, around 30,000 such tools are implemented every month, meaning that more competitors are trying to gain stronger favor among consumers.

Where does the market lie?
What kinds of applications a business develops depends on its primary practices and which consumers it's targeting. Companies in the service industry have acquired positive return on investment from the endeavor. Clark referenced a tool created by Eccleston Square Hotel in London, which provides guests with:

  • Room service
  • A map and direction feature
  • A popular attractions section
  • Dining recommendations
  • General hotel information

"Nowadays, people use mobiles more than the website on a desktop," Eccleston Square Hotel Company Director Olivia Byrne told Clark. "Our app has lots more functions, and the fact that it stays on the phone after checkout is a constant reminder of our hotel."

Providing a solid platform
Depending on how complex and flexible enterprises want their mobile applications to be, it could be in their best interests to seek consultation from DBA services. The environments needed to create modern smartphone tools can be quite complicated, so having a dedicated team monitor them is essential.

The post Mobile applications can be a boon for businesses appeared first on Remote DBA Experts.

Oracle OpenWorld and JavaOne SF 2014 Content Catalogs are Available!

OTN TechBlog - Thu, 2014-06-19 11:31

Session abstracts are now available to view for Oracle OpenWorld and JavaOne San Francisco 2014!  The session catalog will help you:

  • Discover content targeted to you by searching on programs, tracks, keywords, and session types
  • Find sessions from your favorite speakers
  • See the full range of products, technologies, and solutions that will be showcased by Oracle experts and partners in the exhibition halls this year

Start planning your Oracle OpenWorld/JavaOne week, and check back often—new content will be added over the following weeks and months.

The OTN team is looking forward to seeing you at Oracle OpenWorld and JavaOne San Francisco 2014!