Feed aggregator

Getting Smarter in Renting with Tableau 10

Rittman Mead Consulting - 10 hours 20 min ago

Not long ago a friend of mine spent a significant amount of time trying to find a flat to rent. According to him, it wasn't an easy task: it took considerable time and effort to find something big enough (but not too big), not too far from his workplace, with the required features, and affordable at the same time. As a specialist in data analysis, I prefer to think about this task as a data discovery one (yes, when you have a hammer everything looks like a nail). So I decided to see if a data analysis tool could help me understand the rental market better. I'm sure you've already read the name of this post, so I can't pretend I'm keeping any intrigue. That tool is Tableau 10.3.

The Data

The friend I mentioned was looking for a flat in Moscow, but I suspect that market is completely unknown to most readers, and I'd have to spend half my time translating everything into English. So for this exercise I took Brighton and Hove data from http://rightmove.co.uk and got a nice JSON Lines file. A JSON Lines file is basically the same JSON we all know, except that each file holds multiple JSON objects delimited by newlines.

{json line #1}
{json line #2}
{json line #n}

That could have been a real problem, but luckily Tableau introduced JSON support in Tableau 10.1, which means I don't have to transform my data into a set of flat tables. Thanks to the Tableau developers, we can simply open JSON Lines files without any transformation.
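If you don't have Tableau 10.1+ to hand, flattening a JSON Lines file yourself takes only a few lines. A minimal sketch in Python (the helper name is mine, not part of the original workflow):

```python
import json

def read_json_lines(path):
    """Parse a JSON Lines file: one JSON object per line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines between records
                records.append(json.loads(line))
    return records
```

From here, the list of dicts can be fed to any tool that expects tabular data.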

A typical property description looks like this:


It has a few major blocks:

  • Property name - 2 bedroom apartment to rent
  • Monthly price - £1,250
  • Description tab:
    • Letting information - this part is more or less standard and has only a small number of possible values. It has a Property name: Property value structure ('Date available': 'Now').
    • Key features - an unformalized set of features. Every property may have its own unique features. It is not a key-value list like Letting information, but a simple list of features.
    • Full description - simply a block of unstructured text.
  • Nearest stations - shows the three nearest train stations (there would be underground stations too, if Brighton had any).
  • School checker - shows the 10 closest primary and 10 closest secondary schools. For these, I found a kind of API which gave me a detailed description of every school.

And finally, here is what the JSON for one property looks like. In reality it is one line, but to make it easier to read I formatted it into a human-readable form. I also deleted most of the schools' info, as it is not as important as it is huge.

Property JSON (abridged):

   "property_price_week":"£254 pw",
         "station_dist":"(0.4 mi)"
         "station_dist":"(1.2 mi)"
         "station_dist":"(1.7 mi)"
   "letting_type":"Long term",
            "distance":"0.6 miles",
            "name":"Bishop Luffa School, Chichester",
   "agent":"On The Move, South",
   "full_description":"<p itemprop=\"description\">We are delighted to bring to market, this fabulous semi detached bungalow ... </p>",
            "distance":"0.3 miles",
            "name":"Fishbourne CofE Primary School",
   "property_address":[ "Mill Close, Chichester, West Sussex, PO19"],
   "property_name":"3 bedroom bungalow to rent",
   "date_added":"08 June 2017 (18 hours ago)",
   "property_price_month":"£1,100 pcm",

The full version is here: 6,391 lines, I warned you. My dataset is relatively small, with 1,114 such records, 117 MB in total.

Just a few things I'd like to highlight. Letting information has only a small number of fixed unique options; I managed to parse them into fields like furnish, letting_type, etc. The Key Features list became just an array: we have thousands of various features here, and I can't put them into separate fields. The Nearest stations list became an array of name-value pairs. My first version of the scraper put them into a key-value list, like this:

      "Fishbourne": "(0.4 mi)",
      "Chichester": "(1.2 mi)",
      "Bosham": "(1.7 mi)"

but this didn't work as intended. I got around one hundred measures with names like Fishbourne, Chichester, Bosham, etc. Not what I need. That approach could work well if I had only a small number of important POIs (airports, for example) and wanted to know the distances to those points. So I changed it to this, and it worked well:

         "station_dist":"(0.4 mi)"
         "station_dist":"(1.2 mi)"
         "station_dist":"(1.7 mi)"
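The fix on the scraper side is a small restructuring: turn the station dictionary into a list of records, so each station becomes a row with two fixed fields instead of producing its own measure. An illustrative sketch in Python (`station_dist` follows the JSON above; the `station_name` field and helper name are my own):

```python
def stations_to_records(stations):
    """Reshape {"Fishbourne": "(0.4 mi)", ...} into a list of records,
    so a BI tool sees two columns (station_name, station_dist)
    rather than one measure per station name."""
    return [{"station_name": name, "station_dist": dist}
            for name, dist in stations.items()]
```

The same reshaping applies to any key-value map where the keys are data, not schema.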
Connect to the Data

When I started this study my knowledge of the UK property rent market was close to this:


And it's possible or even likely that some of my conclusions may be obvious to anyone who is deep into the topic. In this post, I show how a complete newbie (me) can use Tableau and become less ignorant.

So my very first task was to understand what kinds of objects are available for rent, what their prices are, and so on. That is a typical task for any new subject area.

As I said before, Tableau 10 can work with JSON files natively, but the question was whether it could work with a JSON as complex as mine. I started a new project and opened my JSON file.


I expected that I would have to simplify it somehow. But in reality, after a few seconds of waiting, Tableau displayed the full structure of my JSON, and all I had to do was select the branches I needed.


After a few more seconds I got a normal Tableau data source.


And this is how it looked in analysis mode:


First Look at the Data

OK, let's get started. The first question is obvious: "What types of property are available for rent?". Well, it seems that the name ('2 bedroom apartment to rent') is what I need. I created a table report for this field.


Well, it gives me a first impression of what objects are offered and what my next step should be. First of all, the names end with "to rent". This just makes the strings longer without adding any value. The word "bedroom" also doesn't look important. Ideally, I'd like to parse these strings into two fields: the number of bedrooms and the property type. The most obvious action is to try the Split function.


Well, it partially worked. The function is smart enough to remove the 'to rent' part, but beyond that it gave me nothing. On other datasets (other cities) it produced much better results, but it still wasn't able to read my mind and do what I wanted:


But I spent only 15 seconds on this and lost nothing, and if it had worked it would have saved me a lot of time. Anyway, I'm too old to believe in magic, and this almost didn't hurt my feelings.

Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.

Yes, this string literally asks for some regular expression wizardry.


I can easily use REGEXP_EXTRACT_NTH and get what I want. Group 1 is the number of bedrooms and Group 3 is the property type; Groups 2 and 4 are just constant words.


Explanation of my regular expression: I can describe most of the names in the following way: "digit bedroom property type to rent", and the rest are "property type to rent". So digit and bedroom are optional, while property type and to rent are mandatory. The expression is easy and obvious: ([0-9]*)( bedroom )*(.*)( to rent)
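The same extraction can be tried outside Tableau. Here is the pattern above applied with Python's `re` module; the helper function is my own, but for these property names it should behave the same as REGEXP_EXTRACT_NTH with groups 1 and 3:

```python
import re

# The pattern from the text: digit and "bedroom" are optional,
# property type and "to rent" are mandatory.
PATTERN = re.compile(r"([0-9]*)( bedroom )*(.*)( to rent)")

def parse_property_name(name):
    """Split '2 bedroom apartment to rent' into (bedrooms, property_type)."""
    m = PATTERN.match(name)
    if not m:
        return None, None
    # Group 1 is the digit (empty string when absent), group 3 the type.
    return m.group(1) or None, m.group(3)
```

Note how backtracking makes the greedy `(.*)` give back " to rent", so group 3 ends up holding just the property type.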

Regular expressions are one of my favourite hammers and helped me a lot in this analysis. After all the manipulations, I got a much better view of the data (I skipped some obvious steps, like creating a crosstab or a count distinct measure, to save space for something more interesting).


And while this result looks pretty simple, it gives me the first insight I couldn't get by simply browsing the site. The most offered properties are 1 and 2 bedroom ones, especially flats and apartments. And if a family needs something bigger, with 4 or 5 bedrooms, well, I wish them good luck; there aren't many offers to choose from. Also, if we talk about living property only, we should filter out things like GARAGE, PARKING or LAND.


I think both charts work pretty well. The first one presents a nice view of how flats and apartments outnumber all other types, and the second one gives a much better understanding of how many 2 bedroom properties are offered compared to all others.

And while I'm not a big fan of fancy visualisations, if you need something less formal and more eye-catching, try a Bubbles chart. It's not something I'd recommend for analysis, but it may work well for a presentation. Every bubble represents a particular property type, colour shows the number of bedrooms, and size shows the number of properties.


Going Deeper

The next obvious question is the price. How much do different properties cost? Is a particular one more expensive than average, or cheaper? What influences the price?

As a baseline, I'd like to know the average property price. And I obviously don't want just one figure for the city-wide price; that's meaningless. Let's start with a bar chart and see what the range of prices is.


Well, we have a lot of options. A flat share costs less than £700, or we may choose a barn for more than £3,600. Again, a very simple result, but one I can't get directly from the site.

The next obvious question is how the number of bedrooms affects the price. Does the price skyrocket with every additional bedroom, or do more bedrooms mean smaller rooms and a price that increases more slowly?


Well, this chart gives me the answer, but it looks bad, mostly because a lot of property types don't have enough variance in room number. Studio flats have only one bedroom by definition, and the only converted barn has 7 bedrooms. I'd like to remove types which don't have at least 3 options and see how the price changes. For this, I created a new computed field using the FIXED keyword. It counts the number of bedroom options by property type.


And then I used it in a filter: 'Bedroom # variance' at least 3. Now I have a much cleaner view, and I can see that more bedrooms typically mean a significantly higher price, with a few exceptions. In fact, these are not actual exceptions, just an artefact of a small dataset. I can say that an increase in the number of bedrooms certainly means a significant increase in price. And one more insight: going above 7 bedrooms may actually double the price.
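For readers without Tableau, the same FIXED-style calculation is easy to reproduce: count the distinct bedroom options per property type and keep only types with enough variance. A sketch in plain Python (the record layout is assumed for illustration, not taken from the original dataset):

```python
from collections import defaultdict

def bedroom_variance(properties):
    """Distinct bedroom counts per property type - roughly the
    equivalent of {FIXED [Type]: COUNTD([Bedrooms])} in Tableau."""
    options = defaultdict(set)
    for p in properties:
        options[p["type"]].add(p["bedrooms"])
    return {t: len(beds) for t, beds in options.items()}

def keep_varied_types(properties, min_options=3):
    """Filter out property types with fewer than min_options bedroom counts."""
    variance = bedroom_variance(properties)
    return [p for p in properties if variance[p["type"]] >= min_options]
```

The filter removes one-off types like the single converted barn before comparing prices.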


Averages are good, but they hide important information about how prices are distributed. For example, six properties priced at £1K and one at £200 give an average of £885, and looking at the average only may make you think that with £900 you may choose one of 7 options. It's very easy to build a chart to check this: just create a new calculation called Bins and use it in a chart.


With £100 bins I got the following chart. It shows how many properties have a price falling into a particular range. For example, the £1000 bin shows the number of properties priced £1000-£1100.
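The Bins calculation is just integer division: each price is assigned to the lower edge of its £100 interval. An illustrative sketch:

```python
def price_bin(price, width=100):
    """Lower edge of the bin a price falls into
    (e.g. 1050 -> 1000, covering the £1000-£1100 range)."""
    return (price // width) * width

def histogram(prices, width=100):
    """Count properties per price bin."""
    counts = {}
    for p in prices:
        b = price_bin(p, width)
        counts[b] = counts.get(b, 0) + 1
    return counts
```

Plotting the resulting counts against the bin edges reproduces the chart above.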


The distribution looks more or less as expected, but the most interesting thing here is that the £1000-£1100 interval seems to be very unpopular. Why? Let's add the number of bedrooms to this chart.


£1000 is too expensive for 1 bedroom properties and studios, but too cheap for two bedrooms. Simple. What else can we do here before moving further? Converting this chart to a running total gives a cool view.


What can this chart tell us? For example, if we look at the orange line (2 bedrooms), we find that with £1200 we may choose among 277 of 624 properties. With a £1400 budget we have 486 of 624. A further £200 increase in budget won't significantly increase the number of possibilities: if the change from £1200 to £1400 almost doubled the number of options, the next £200 gives only 63 new ones. I don't have a ready-to-use insight here, but I got a way to estimate a budget for a particular type of property: with budget £X I will be able to choose one of N properties.
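A running total is simply a cumulative sum over the price bins, which is what Tableau computes behind the scenes. An illustrative sketch, taking a {bin lower edge: count} mapping:

```python
def running_total(counts):
    """Cumulative number of properties affordable at each budget level,
    given a {bin_lower_edge: count} histogram."""
    total = 0
    out = {}
    for edge in sorted(counts):
        total += counts[edge]
        out[edge] = total
    return out
```

Reading the result at a budget £X directly answers "how many properties can I choose from with £X?".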

Why It Costs What It Costs

OK, now I know a lot of statistics about prices. My next question is about the factors affecting the price. I'd like to understand whether a particular property is worth what it costs. Of course, I won't be able to determine the exact price, but even hints may be useful.

The first hypothesis I want to check is whether a nearby train station raises the price, or whether it doesn't matter at all. I made a chart very similar to the previous one, and it seems that the Pareto principle works perfectly here: 80% of properties are closer than 20% of the maximum distance to a station.


But this chart doesn't say anything about the price; it just gives me an understanding of how densely the train stations are placed. I'd say that most of the properties have a station within a 10-15 minute walk, and therefore this should not significantly affect the price. My next chart is a scatter plot of price and distance. Every point is a property, and its coordinates on the plot are determined by its price and its distance to the nearest station. Colour shows the number of bedrooms.


I'd say this chart shows no clear correlation between price and distance, and a more classical line chart confirms it.


The maximum price slightly decreases with distance, while the minimum price, on the contrary, increases. The average price is more or less constant. I think the hypothesis is busted: there is no clear correlation between the distance a tenant has to walk to a station and the price he has to pay. If you want to rent something and the landlord says that the price is high because of a nearby train station, tell him that there are stations all around and he should find something more interesting.

What about furnishings? Is it cheaper to rent an unfurnished property, or will a landlord be happy to meet someone who shares his taste?


Unfurnished property is definitely cheaper. And it's interesting that in some cases partly furnished is even cheaper than completely unfurnished. But at least for furnished/unfurnished we can see a clear correlation. When you see a furnished property for the price of an unfurnished one, it may be a good bargain.

Another thing I'd like to check: can we expect a lower price for a property that is not available immediately? Or, on the contrary, is the best price offered for already unoccupied properties?

As always, start with the general picture: what is the average time to availability by property type?


For the most popular types it is about one month, and if you have a house you typically publish it two or three months in advance. And what about the price? Here is one more chart I like in Tableau. In a nutshell, it is a normal line chart showing the average price by days before property availability, but the thickness of the lines also shows the number of properties, so I can see not only the price but its reliability too. A thick line means it was formed by many properties; a thin line may be formed by just a few properties and move up or down significantly when something changes. It would be very interesting to get historical data and see how long properties stay vacant or how long it takes before the price is reduced, but unfortunately I don't have that data.


Looking at this chart, I'd say that there is no statistically significant dependency between price and availability date. Renting a property available in the distant future won't save you money* (*=statistically).

And the last thing I'd like to investigate is the Key features. What do landlords list as the key features of their properties? How do these affect the price?

The list of popular Key features surprised me.


'Unfurnished' looks good to me; it is a really significant part of the deal. But 'Brighton'? For properties in Brighton? '1 Bedroom'? How many bedrooms can a '1 bedroom flat to rent' have? Oh, there is a key feature saying '1 bedroom', now I know. But jokes aside, I had to do a lot of cleaning on this data before I could use it. There are six ways to write 'Modern kitchen'. Make everything upper case, then remove quotes, strip spaces and tabs, remove noisy features like 'stylish 1 bedroom apartment', and so on. After this, I got a slightly better list with approximately 3,500 features instead of 4,500. Note how all the variants of writing 'GAS CENTRAL HEATING' are now combined into the one most popular feature. But there are still too many features; I'm sure there should be no more than a hundred of them. Even in this screenshot you may see both 'Unfurnished' and 'Unfurnished property' features.
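The cleaning steps described above (upper-case, drop quotes, strip punctuation and whitespace) can be sketched as a small normalisation function. This is an illustrative approximation of the process, not the exact rules I applied:

```python
import re

def normalise_feature(feature):
    """Crude normalisation of a free-text key feature:
    upper-case, drop quotes and stray punctuation, collapse whitespace."""
    f = feature.upper()
    f = f.replace('"', "").replace("'", "")
    f = re.sub(r"[.!*]+", "", f)       # strip noisy punctuation
    f = re.sub(r"\s+", " ", f).strip() # collapse spaces and tabs
    return f
```

Applied across the dataset, this is what merges the spelling variants of 'GAS CENTRAL HEATING' into one feature.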


When I need a visualisation for this number of points, bar charts or tables won't play well. My weapon of choice is the scatter plot. Every point is a particular feature, the axes are its minimum and average prices, size is determined by the number of properties declaring this feature, and the colour is the maximum price. So if a feature is located high on the plot, it means that on average it is expensive to have. If the same feature is also located close to the left side, even cheap properties may have it. For example, if you want a swimming pool, be ready to pay at least £3,000, and £7,000 on average. The minimum price for a tumble dryer is £3,250 but the average is £3,965: the cheapest property with a dryer is more expensive than the cheapest with a pool, but on average pools are more expensive. That is how this chart works.


The problems of this chart are obvious. It is littered with unique features: only one property has 4 acres (the point in the top right corner), and actually not that many swimming pools are available for rent in Brighton. I filtered it by "# of properties > 25", and here is how the prices for the most popular features are distributed.


A central location will cost you at least £100 and £1,195 on average, while for a great location be ready to pay at least £445 and £1,013 on average. 'Great location' seems to be less valuable than a central one.

And now I can see how a particular feature impacts prices. For example, 'GAS HEATING'. I made a set with all the variants of heating I could find ('GAS CENTRAL HEATING', 'GAS HEAT' and so on). Now I can analyse how this feature impacts properties. Here is how it impacts the price of flats: blue circles are properties with gas heating, and orange are without.


Very interesting, in my opinion. The minimum price of properties with gas heating (blue circles) is higher than without, which is expected. But the average price of properties without gas heating is higher.

And here are kitchen appliances. For 1 bedroom flats they increase both the minimum and the average price significantly. But for bigger flats the minimum price with appliances is higher while the average price is lower. Possibly this option is important for relatively cheap properties, but its weight is not that big for bigger ones.




Categories: BI & Warehousing


Tom Kyte - Wed, 2017-06-21 22:26
Below is the employee table, EID NAME SALARY COMM_PCT 100 Steven 24000 101 Neena 17000 0.6 102 Lex 17000 0.8 145 John 14000 0.4 If I do the below select against the employee table I get the output as 3. SELECT DENSE_RANK(17000,0.6) W...
Categories: DBA Blogs

SQL execution plan - access predicates determination

Tom Kyte - Wed, 2017-06-21 22:26
Hello Tom, I know Tom has retired. But glad that a group of Oracle expertise help to keep operation of AskTom. It is indeed very helpful. I was facing a SQL plan issue in an SAP/Oracle Environment. The SQL is: <code> SELECT DISTINCT "M...
Categories: DBA Blogs

The VERSION column - A unsung treasure

Anthony Shorten - Wed, 2017-06-21 20:58

If you use an Oracle Utilities Application Framework based product, you will notice that the column VERSION exists on all objects in the product. There is a very important reason this column exists on those tables.

One of the common problems in an online system is the lost update problem. Let me explain: say we have two users (there can be more), User A and User B.

  • User A reads Object A to edit it.
  • User B reads Object A as well to edit it at the same time.
  • User B saves the Object changes first.
  • User A saves the Object changes.

Now, without protection, the changes that User B made would be overwritten by User A's changes. We have lost User B's changes. This is the lost update problem in a nutshell.

Now using the VERSION column changes the above scenario:

  • When User A and User B read the object, the current value of VERSION is noted.
  • Whenever the object is updated, the value of VERSION is checked. If it is the same as the value of VERSION when the record was read, then VERSION is incremented as part of the update.
  • If the value of VERSION does not match, the product issues a "Concurrency Error" and asks the user to retry the transaction (after reloading the changed object).

In our scenario, User A would receive the message, as the value of VERSION has been incremented since that user read it, and therefore no longer matches.

VERSION is a standard column on all objects in the system and applies no matter what channel (online, web services or batch) updates the object.
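The check-and-increment described above can be expressed as a single guarded UPDATE. Here is a minimal optimistic-locking sketch using SQLite from Python (the table and column names are invented for illustration; the framework's actual SQL will differ):

```python
import sqlite3

def update_with_version(conn, obj_id, new_name, version_read):
    """Optimistic-locking update: succeeds only if VERSION is unchanged
    since the row was read; otherwise signals a concurrency error."""
    cur = conn.execute(
        "UPDATE obj SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, obj_id, version_read))
    conn.commit()
    return cur.rowcount == 1  # False -> someone else updated the row first
```

In the two-user scenario, User B's update matches the stored VERSION and succeeds; User A's update, still carrying the old VERSION, matches zero rows and is rejected.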

EM13cR2 Installation Failing on BI Publisher Configuration

Pythian Group - Wed, 2017-06-21 16:07

This is going to be a short post, as there wasn't much information on this type of failure when I searched for it. Also, the IPs have been changed for security reasons.

I was working on an Oracle EM13cR2 installation and when the installation reached 78%, it failed on the BI Publisher configuration. First, I looked at the CfmLogger log.

INFO: oracle.sysman.top.oms:WLST offline debugging is in the file: /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log.wlst_20170613152122.trc
INFO: oracle.sysman.top.oms:Securing BI Publisher to work with Enterprise Manager
INFO: oracle.sysman.top.oms:Locking Enterprise Manager ...
INFO: oracle.sysman.top.oms:OMS Console is locked. Access the console over HTTPS ports.
INFO: oracle.sysman.top.oms:BI Publisher is locked. Access BI Publisher over HTTPS ports.
INFO: oracle.sysman.top.oms:Restart OMS.
INFO: oracle.sysman.top.oms:Restarting Enterprise Manager
INFO: oracle.sysman.top.oms:Stopping Enterprise Manager, this can take some time ...
INFO: oracle.sysman.top.oms:Starting BI Publisher ...
INFO: oracle.sysman.top.oms:Failed to start Enterprise Manager. Diagnostic code 1.
INFO: oracle.sysman.top.oms:See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log for details.
INFO: oracle.sysman.top.oms:Error extending domain
INFO: oracle.sysman.top.oms:See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log for details.
INFO: oracle.sysman.top.oms:java.lang.Exception: See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log for details.
INFO: oracle.sysman.top.oms: at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.extendDomainWithBIP(BIPCA.java:3401)
INFO: oracle.sysman.top.oms: at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.main(BIPCA.java:1973)

This led me to bipca.log, which referenced the details below (which really didn't tell me much):

[2017-05-09T21:55:17.585-07:00] [sysman] [ERROR] [] [oracle.sysman.bipca] [host: OracleEntMgr] [nwaddr:] [tid: 1] [userId: oracle] [ecid: a4412d2a-def5-49d8-ac27-1046c7243de9-00000001,0] [SRC_CLASS: oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA] [SRC_METHOD: handleFatalError] Error extending domain
[2017-05-09T21:55:17.586-07:00] [sysman] [ERROR] [] [oracle.sysman.bipca] [host: OracleEntMgr] [nwaddr:] [tid: 1] [userId: oracle] [ecid: a4412d2a-def5-49d8-ac27-1046c7243de9-00000001,0] [SRC_CLASS: oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA] [SRC_METHOD: handleFatalError] Fatal error:[[
java.lang.Exception: See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170509214828.log for details.
at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.extendDomainWithBIP(BIPCA.java:3401)
at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.main(BIPCA.java:1973)

A review of the bipca trace file didn’t tell me much about why the configuration was failing:

2017-05-24 09:26:51,250 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Selected DB vendor: Oracle
2017-05-24 09:26:51,250 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - adding normal datasource: emgc-mgmt_view-pool
2017-05-24 09:26:51,250 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - datasource: emgc-mgmt_view-pool component name: null
2017-05-24 09:26:51,251 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Found normal datasource: emgc-sysman-pool
2017-05-24 09:26:51,253 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Decomposing url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))(CONNECT_DATA=(SID=emrep))) for driver: oracle.jdbc.OracleDriver
2017-05-24 09:26:51,253 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - There is an unsupported jdbc URL or JDBC driver is detected. See Stack trace:
at com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper.decomposeURL(DatasourceXBeanAspectHelper.java:316)
at java.lang.reflect.Method.invoke(Method.java:606)
at weblogic.WLST.main(WLST.java:29)
2017-05-24 09:26:51,253 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Found normal datasource: jvmd-ds
2017-05-24 09:26:51,255 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Decomposing url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))(CONNECT_DATA=(SID=emrep))) for driver: oracle.jdbc.OracleDriver
2017-05-24 09:26:51,255 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - There is an unsupported jdbc URL or JDBC driver is detected. See Stack trace:
at com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper.decomposeURL(DatasourceXBeanAspectHelper.java:316)
at org.python.pycode._pyx95.updateDomain$21(/tmp/WLSTOfflineIni822569357732826272.py:103)
at org.python.pycode._pyx95.call_function(/tmp/WLSTOfflineIni822569357732826272.py)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyFunction.__call__(Unknown Source)

After reading for a while and working back and forth with Oracle, we identified the problem: the host wasn't resolving via nslookup. Though we were able to ping it, nslookup queries a DNS nameserver to look up the host's IP address, and that lookup was failing. So what we had to do was fix this:

[oracle@dbhost ~]# nslookup dbhost

Name: dbhost

Once we fixed this, we were able to install EM13cR2 successfully.
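A quick way to test name resolution from code is the system resolver. Note the subtlety that bit us: ping uses the system resolver (which also consults /etc/hosts), while nslookup queries DNS nameservers directly, which is why the two can disagree. An illustrative sketch that mirrors the ping side of that check:

```python
import socket

def resolves(hostname):
    """Check whether a hostname resolves via the system resolver.
    A True here with a failing nslookup points at DNS rather than /etc/hosts."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False
```

Comparing this result with nslookup's narrows down whether the fix belongs in /etc/hosts or in DNS.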

Note:This was originally published on rene-ace.com

Categories: DBA Blogs

(bash): The most useless commands (5)

Dietrich Schroff - Wed, 2017-06-21 15:17
In a comment, Albert Godfrind pointed out that the commands I was talking about, like
  1. rev
  2. sl
  3. cowsay
are not really bash commands (in the sense that they are installed as part of the bash shell itself). They are commands which can be used from every shell. So my headline is a little bit misleading - thanks for this notice.

Just for clarification, here is a list of bash builtin commands:
  • alias
  • bind
  • builtin
  • caller
  • command
  • declare
  • echo
  • enable
  • help
  • let
  • local
  • logout
  • mapfile
  • printf
  • read
  • readarray
  • source
  • type
  • typeset
  • ulimit
  • unalias
and the bourne shell builtins:

  • break
  • cd
  • continue
  • eval
  • exec
  • export
  • getopts
  • hash
  • pwd
  • readonly
  • return
  • shift
  • test
  • times
  • trap
  • umask
  • unset
Still missing:

Q4 FY17 GAAP EPS UP 15% TO $0.76 and NON-GAAP EPS UP 10% TO $0.89

Oracle Press Releases - Wed, 2017-06-21 15:00
Press Release
Q4 FY17 GAAP EPS UP 15% TO $0.76 and NON-GAAP EPS UP 10% TO $0.89 Total Cloud Revenues Up 58% to $1.4 Billion, Total Revenue Up 3% to $10.9 Billion

Redwood Shores, Calif.—Jun 21, 2017

Oracle Corporation (NYSE: ORCL) today announced fiscal 2017 Q4 results and fiscal 2017 full year results. Comparing Q4 2017 to Q4 last year, SaaS (Software as a Service) cloud revenues were up 67% to $964 million, and non-GAAP SaaS revenues were up 75% to $1.0 billion. Cloud PaaS (Platform as a Service) plus IaaS (Infrastructure as a Service) revenues were up 40% to $397 million, and non-GAAP PaaS plus IaaS revenues were up 42% to $403 million. Total cloud revenues were up 58% to $1.4 billion, and non-GAAP total cloud revenues were up 64% to $1.4 billion. Cloud plus on-premise software revenues were up 5% to $8.9 billion, and non-GAAP cloud and on-premise software revenues were up 6% to $8.9 billion. Total revenues were up 3% to $10.9 billion.

Operating Income was up 3% to $4.1 billion, and the operating margin was 37%. Non-GAAP Operating Income was up 5% to $5.0 billion, and the non-GAAP operating margin was 46%. Net Income was $3.2 billion, and non-GAAP Net Income was $3.8 billion. Earnings Per Share was $0.76, while non-GAAP Earnings Per Share was $0.89. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP Earnings Per Share would have been 2 cents higher, and non-GAAP Earnings Per Share would have been 1 cent higher.

Short-term deferred revenues were up 8% to $8.2 billion compared with a year ago. Operating cash flow on a trailing twelve-month basis was up 3% to $14.1 billion.

For fiscal 2017, Cloud SaaS revenues were up 61% to $3.2 billion compared to fiscal 2016. Non-GAAP SaaS revenues were up 68% to $3.4 billion. Cloud PaaS and IaaS revenues were up 60% to $1.4 billion for both GAAP and Non-GAAP. Total cloud revenues were up 60% to $4.6 billion. Non-GAAP cloud revenues were up 66% to $4.7 billion. Cloud and on-premise software revenues were up 4% to $30.2 billion. Non-GAAP cloud and on-premise software revenues were up 5% to $30.4 billion. Total Revenues were up 2% to $37.7 billion. Non-GAAP Total Revenues were up 2% to $37.9 billion.

Operating Income was $12.7 billion, and operating margin was 34%. Non-GAAP Operating Income was $16.2 billion and non-GAAP operating margin was 43%. Net Income was $9.3 billion while non-GAAP Net Income was $11.6 billion. Earnings Per Share was $2.21, while Non-GAAP Earnings Per Share was $2.74. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP Earnings Per Share would have been 4 cents higher and non-GAAP Earnings Per Share would have been 3 cents higher.

“Our fourth quarter results were very strong as revenue growth and earnings per share both substantially exceeded the high end of guidance,” said Oracle CEO, Safra Catz. “We continue to experience rapid adoption of the Oracle Cloud led by the 75% growth in our SaaS business in Q4. This cloud hyper-growth is expanding our operating margins, and we expect earnings per share growth to accelerate in fiscal 2018.”

“We sold $855 million of new annually recurring cloud revenue (ARR) in Q4, putting us over our $2 billion ARR bookings goal for fiscal year 2017,” said Oracle CEO, Mark Hurd. “We also delivered over $1 billion in quarterly SaaS revenue for the first time. Next year is going to be even better. We expect to sell a lot more than $2 billion in new cloud ARR in fiscal year 2018.”

“AT&T has agreed to migrate thousands of existing Oracle databases containing petabytes of data plus their associated applications workloads to the Oracle Cloud,” said Oracle Chairman and CTO, Larry Ellison. “In the coming year, I expect more of our big customers to migrate their Oracle databases and database applications to the Oracle Cloud. These large-scale migrations will dramatically increase the size of both our PaaS and IaaS cloud businesses.”

The Board of Directors also declared a quarterly cash dividend of $0.19 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 19, 2017, with a payment date of August 2, 2017.

Q4 Fiscal 2017 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast of this event, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q4 results and fiscal 2017 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 34656498.

Contact Info
Ken Bond
Oracle Investor Relations
Deborah Hellinger
Oracle Corporate Communications
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

"Safe Harbor" Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding our future sales and non-GAAP financial results, expectations of future customer migrations to the Oracle Cloud and growth in the size of our PaaS and IaaS cloud business, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud computing strategy, including our Oracle Cloud SaaS, PaaS, IaaS and data as a service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, or to enhance and improve our products and support services in a timely manner or to position and/or price our products and services to meet market demand, customers may not buy new software licenses, cloud software subscriptions or hardware systems products or purchase or renew support contracts. (3) If the security measures for our products and services are compromised or if our products and services contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and reduced sales. (4) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for support contracts. (5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. 
(6) Economic, geopolitical and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of June 21, 2017. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

New Window Bookmarklet

Jim Marion - Wed, 2017-06-21 12:05

I am a "New Window" link junkie. I use that link ALL THE TIME! If it were possible to wear it out, mine would be worn out. I wish all PeopleSoft pages had the "New Window" link. For some reason, however, certain developers chose to remove it from specific PeopleSoft pages (such as Structure and Content). I'm sure there is a good reason... there just has to be. So seeing it missing from Fluid has been a significant struggle for me. I'm thankful for Sasank's Fluid UI - New Window Feature - Workaround customization. For quick access to a new window without customization, I have a Bookmarklet, which is a JavaScript fragment masquerading as a favorite (or bookmark). Here is the JavaScript:

(function() {
  // Extract servlet path, site name (minus any _NN state-block suffix), and the rest of the URL.
  var parts = window.location.href.match(/(.+?\/ps[pc])\/(.+?)(?:_\d+?)*?\/(.*)/);
  // Re-open the same URL with _newwin appended to the site name.
  window.open(parts[1] + '/' + parts[2] + '_newwin/' + parts[3], '_blank');
})();

To add it to your bookmark toolbar, drag the following link into your link toolbar:

PS New Window

This solution is simple, but may not satisfy your requirements. This bookmarklet assumes you want to open a new window to the URL displayed in the address bar. That URL may or may not match the actual transaction. If you want a bookmarklet that opens a new window specifically targeting the current transaction, then try this bookmarklet:

(function() {
  var href = window.location.href;
  // Prefer the transaction URL tracked by the TargetContent frame, if present;
  // otherwise fall back to the address bar URL.
  var frame = frames["TargetContent"];
  var parts = (frame && frame.strCurrUrl ? frame.strCurrUrl : href).match(/(.+?\/ps[pc])\/(.+?)(?:_\d+?)*?\/(.*)/);
  window.open(parts[1] + '/' + parts[2] + '_newwin/' + parts[3], '_blank');
})();

To use it, drag the following link into your bookmark toolbar:

PS New Window

Special shout out to David Wiggins, who posted a similar bookmarklet on my Where is My New Window Link? post as I was writing this blog post.

Oracle Ksplice on Oracle Linux in Bare Metal Cloud

Wim Coekaerts - Wed, 2017-06-21 09:58

One of the great advantages of using Oracle Cloud is the fact that it includes full Oracle Linux support. All the services that you get with Oracle Linux Premier support are included without additional cost when you use Oracle Cloud.

Oracle Ksplice is one such service (see http://www.ksplice.com/ ). To use Oracle Ksplice outside of Oracle Cloud, you configure it at install time when registering your Oracle Linux server with ULN (http://linux.oracle.com ), and then use the generated access key to configure the uptrack tools.

With Oracle Cloud, both Oracle Public Cloud and Oracle Bare Metal Cloud Services ( http://cloud.oracle.com ), we have made it very easy. Any instance that runs inside our infrastructure has immediate access to the ksplice servers.

For customers or users with existing Oracle Linux instances in BMCS, a few simple steps are needed to enable Ksplice. We are in the process of adding the uptrack tools to the image by default, so soon you won't have to do any configuration at all.

Enable Ksplice today:

Log into your Oracle Linux instance as user opc (or as root)

# sudo bash

Download the uptrack client:

# wget -N https://www.ksplice.com/uptrack/install-uptrack

or if you prefer to use curl

# curl -O https://www.ksplice.com/uptrack/install-uptrack

Install the client. Make sure you use this exact key: it is a generic identifier and will only work inside BMCS.

# sh install-uptrack dfc21b3ced9af52f6a8760c1b1860f928ba240970a3612bb354c84bb0ce5903e --autoinstall

This command runs the downloaded script and installs the uptrack utilities (Ksplice client tools). Ignore the connect error; it is resolved by the step below.

One more step. In order for the above key to work, you have to point the uptrack tools to a specific update server.

edit /etc/uptrack/uptrack.conf:

# The location of the Uptrack updates repository.


and that's it.

# uptrack-upgrade
Nothing to be done.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-94.3.6.el6uek


For Bring Your Own instances, we will automate the above steps as well, but at least this gets you going right away.


Storage Server: Firmware

Frank van Bortel - Wed, 2017-06-21 09:42
Firmware The first thing to do, in order to get any software RAID program to run, is to flash the controller out of RAID mode. Only then will all of the disks be seen as just a bunch of disks - nothing else. JBOD, for short. The board I have comes with an LSI SAS2308 controller, to which I want to connect 12 SATA drives using three SAS-to-SATA breakout cables. Drivers There are

Rittman Mead at Kscope 2017

Rittman Mead Consulting - Wed, 2017-06-21 07:45
Rittman Mead at Kscope 2017

Rittman Mead will be well represented in San Antonio, Texas next week for Kscope 17 with some of our best from both sides of the Atlantic! Our very own Francesco Tisiot and Jordan Meyer will present various topics as well as participate in the conference events. Also, the newly named ODTUG BI Community Lead, Rittman Mead's Becky Wagner, will be on hand and leading a lot of activities throughout. See details below and we hope to see you in Texas.


Oracle Big Data Spatial and Graph enables the analysis of data sets beyond what standard relational analytics can commonly handle. Through graph technology, relationships can be identified that might not otherwise have been found. This has practical uses in product recommendations, social network analysis, and fraud detection.

In this presentation we will see a practical demonstration of Oracle Big Data Spatial and Graph to load and analyze the "Panama Papers" data set. Graph algorithms will be utilized to identify key actors and organizations within the data, and to show patterns of relationships. This practical example of using the tool will give attendees a clear idea of the functionality of the tool and how it could be used within their own organization.

When: Jun 27, 2017, Tuesday Session 7 , 11:15 am - 12:15 pm
Room: Magnolia


OBIEE 12c is the latest generation of Oracle's Enterprise analytics and reporting tool, bringing with it many powerful new features. Many users are still on earlier releases of OBIEE 11g or even 10g, and are looking to understand how they can move to OBIEE 12c to benefit from its new capabilities.

Liberty Global is a global telecommunications company, with a long history with OBIEE going back to 10g. They wanted to move to OBIEE 12c in order to use the new Advanced Analytics options, and used Rittman Mead to support them with the full scope of the upgrade.

In this presentation, we will see what a highly successful OBIEE 12c migration looks like. We will cover clear details of all the steps required, and discuss some of the problems encountered. Regression testing is a crucial step in any upgrade and we will show how we did this efficiently and accurately with the provided Baseline Validation Tool. This presentation will assist all attendees who are considering, or in the process of, an OBIEE 12c upgrade.

When: Jun 26, 2017, Monday Session 5 , 4:45 pm - 5:45 pm
Room: Wisteria/Sunflower


As a DBA or sysadmin responsible for OBIEE how do you really dig into the guts of OBIEE, look at intra-component communication between the system components and examine the apparently un-examinable? What do you do when you need to trace activity beyond what is in the log files? How do you work with log files in order to give precise but low-level information? What information can be gleaned, by hook or by crook, from OBIEE?

OBIEE provides a set of systems management and diagnostic tools, but these only take you so far. Join me in this presentation to dive deeper with OBIEE. We will take a look at a bag of tricks including undocumented configuration options, flame graphs, system call tracing, discovering undocumented REST APIs, and more! This is not just a geek-out - this is real-life examples of where client OBIEE projects have required that next level of diagnostic techniques and tools. Don your beanie hat and beard as we go deep!

When: Jun 28, 2017, Wednesday Session 12 , 9:45 am - 10:45 am
Room: Wisteria/Sunflower


Becky Wagner is the new ODTUG BI Community Lead. You will find her at:

Monday Community Lunch | 12:45 – 2:00 PM | Grand Oaks K-S

Monday evening BI Community Night | 8:00 - 10:00 PM | Grand Oaks H http://kscope17.com/events/community-nigh-events

She will be doing the 5K Fun Run http://kscope17.com/events/kscope17-5k on Tuesday morning

Women in Technology Lunch | 12:15– 1:45 PM | Cibolo Canyon 6 on Wednesday https://form.jotformpro.com/71134693041955

Navigating the Oracle Business Analytics Frontier Panel
9:00 AM - 11:00 AM, Cibolo Canyon 8/9/10

Categories: BI & Warehousing

Alert message on Listener Log growth

Tom Kyte - Wed, 2017-06-21 04:06
Hi, I need a suggestion for getting an alert message on listener log file growth; currently each file grows to 10M. We want an alert when the combined listener log files exceed 1GB. Regards, Binoj
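The feed shows only the question, not the answer from the thread. As a rough illustration of one possible approach (not Tom's answer), a small shell check can sum the log sizes and alert past a threshold; the directory path and file name pattern below are assumptions that depend on your ADR/listener configuration:

```shell
#!/bin/sh
# Sketch: alert when the combined size of listener log files in a
# directory exceeds a byte limit. Directory and file pattern are
# assumptions; adjust them for your listener.log / ADR layout.
check_listener_logs() {
    dir="$1"
    limit_bytes="$2"
    # Concatenate all matching logs and count bytes (0 if none match).
    total=$(( $(find "$dir" -type f -name 'listener*.log*' -exec cat {} + | wc -c) ))
    if [ "$total" -gt "$limit_bytes" ]; then
        echo "ALERT: listener logs total ${total} bytes (limit ${limit_bytes})"
    else
        echo "OK: listener logs total ${total} bytes"
    fi
}

# Example (hypothetical path): warn past 1 GB, e.g. from cron or an OEM metric extension.
# check_listener_logs /u01/app/oracle/diag/tnslsnr/myhost/listener/trace 1073741824
```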
Categories: DBA Blogs

Oracle Linux 7.3 Production Server setup for OEM 13c and repository database.

Tom Kyte - Wed, 2017-06-21 04:06
I am planning to install Oracle Linux 7.3 on VMware. This server will be used to host our Oracle Enterprise Manager 13c and its repository database. I have no experience as a server administrator. Currently I am in a situation where I have to setu...
Categories: DBA Blogs

How do I find out what username the database is passing through when connecting using Windows Authentication?

Tom Kyte - Wed, 2017-06-21 04:06
Hi, I'm having trouble using Windows authentication with my database accounts. I have a front-end application that creates a user account based on what their OS username is, OPS$Username, disregarding the domain name. Logging into the application ...
Categories: DBA Blogs

DBMS_JOB Poll Interval

Tom Kyte - Wed, 2017-06-21 04:06
We recently upgraded databases from to With that upgrade, we noticed that after submitting jobs via dbms_job, the procedures can take anywhere between 5 and 30 seconds to actually start executing after committing. A sample scr...
Categories: DBA Blogs

How to exchange index partition, if there is such a thing

Tom Kyte - Wed, 2017-06-21 04:06
I have a global partition on an INDEX as below (the table was partitioned on different keys): Fiscal year, period_cd, and 2 more keys as shown: CREATE INDEX kr.ufy_idx_px ON kr.LD_LDGR_ENTR_T ( u_fy, u_prd_cd, c_cd, obj_cd) GLOBAL PAR...
Categories: DBA Blogs

Whitepaper: E-Business Suite Upgrade Sizing and Best Practices

Steven Chan - Wed, 2017-06-21 02:00

One of the most common questions we see revolves around sizing for EBS upgrades -- that is, the amount of resources required when upgrading from one EBS release to another. It is difficult to make general statements about these types of requirements that apply uniformly to all customer environments, since every customer's database size, product usage mix, and concurrent end-user loads vary dramatically.

That said, our Applications Performance Group has published a whitepaper that describes an EBS 12.1.3 to EBS 12.2 upgrade:

The upgrade scenario covers a 456 GB database prior to the upgrade, and compares the application tier sizes before and after the upgrade.  Individual tablespace requirements are listed in detail.

Related Articles

Categories: APPS Blogs

Where is My New Window Link?

Jim Marion - Tue, 2017-06-20 23:51

As PeopleSoft moves from Classic to Fluid, you have likely noticed the missing New Window link. Why is it missing? I can only speculate. When considering mobile, perhaps it makes sense to drop the New Window link. Mobile devices have limited screen real estate. Why waste it with a link you will likely never use on a mobile device? On a desktop, however, the New Window link is irreplaceable. So... what to do? How can you open a new window? You probably already know that if you just open a new window without that special New Window link, your old window session will cease to exist. You know that you will receive the dreaded "... return to most recent active page..." message. Does that mean you can no longer have two active PeopleSoft windows? There is a workaround that is documented in various places around the web. In short, the answer is to copy the current URL, open a new tab, paste the URL into the address bar, and then append _newwin to the site name. Before reviewing some examples, let's discuss what is going on and why this is necessary.

The PeopleSoft app server is stateless. App server connections are often pooled and used upon request. However, we know that component buffer state is stored somewhere. If not at the app server, then where? At the web server. As with any J2EE application, PeopleSoft uses web server sessions to store state (which is why load balancers must use sticky sessions, etc). The details here aren't exact, but metaphorical. PeopleSoft partitions web server session state into state blocks. A user may have multiple state blocks. The web server identifies the appropriate state block based on an identifier in the URL. When you click the New Window link, the link's URL pattern instructs the web server to generate a new state block. We can replicate the New Window link behavior by simply modifying a PeopleSoft URL. Let's review an example. Let's say you have a Fluid URL that looks something like this: http://hr.example.com/psc/ps/EMPLOYEE/HRMS/c/EL_EMPLOYEE_FL.HR_EE_ADDR_FL.GBL. The highlighted part between the servlet (psc) and the portal (EMPLOYEE) is the site name (ps). All we have to do is add _newwin to the site name. Accessing a URL such as http://hr.example.com/psc/ps_newwin/EMPLOYEE/HRMS/c/EL_EMPLOYEE_FL.HR_EE_ADDR_FL.GBL will instruct the web server to generate a new state block, perhaps something like http://hr.example.com/psc/ps_23/EMPLOYEE/HRMS/c/EL_EMPLOYEE_FL.HR_EE_ADDR_FL.GBL.
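The address-bar edit described above can also be scripted. Here is a minimal sketch using the post's hypothetical example URL; note this simplified pattern does not strip an existing _NN state-block suffix from the site name the way the bookmarklet regex does:

```shell
# Insert _newwin after the site name, i.e. the path segment that
# follows the psc/psp servlet. Example URL is hypothetical.
url='http://hr.example.com/psc/ps/EMPLOYEE/HRMS/c/EL_EMPLOYEE_FL.HR_EE_ADDR_FL.GBL'

# Rewrite /psc/ps/... to /psc/ps_newwin/... (simplified: assumes no
# existing _NN suffix on the site name).
new_url=$(printf '%s\n' "$url" | sed -E 's#(/ps[pc]/[^/]+)/#\1_newwin/#')

echo "$new_url"
# -> http://hr.example.com/psc/ps_newwin/EMPLOYEE/HRMS/c/EL_EMPLOYEE_FL.HR_EE_ADDR_FL.GBL
```

Pasting the rewritten URL into a new tab instructs the web server to allocate a fresh state block, just like the New Window link.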

It certainly isn't as simple as the New Window link, but it is better than nothing. For a more permanent, user friendly method, take a look at Sasank's post Fluid UI - New Window Feature - Workaround.

Edit: I created this post because a lot of people I meet aren't familiar with the "New Window" trick. One very important caveat when working with classic pages: The URL in the header may not match the transaction URL. This appears to be tools release dependent. After PeopleSoft implemented "Partial Page Rendering" with a constant header and iframed content, the URL for the content area would change, but the URL in the address bar did not. What this means is that simply copying the URL from the address bar and changing it to include '_newwin' will create a new state block, but that new window may point to a different component from the original window.

12c MultiTenant Posts -- 4 : Relocate Individual PDB using RMAN

Hemant K Chitale - Tue, 2017-06-20 22:29
Given the current location of a PDB :

SQL> alter session set container=NEWPDB;

Session altered.

SQL> select file_name from dba_data_files;







I can use RMAN to relocate it. First I take an Image Copy Backup to the new location

RMAN> backup as copy pluggable database newpdb format '/u03/oradata/NEWPDB/%U';

Starting backup at 20-JUN-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00017 name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_sysaux_dkj7f8hf_.dbf
output file name=/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSAUX_FNO-17_09s7a8d2 tag=TAG20170620T231338 RECID=4 STAMP=947200428
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting datafile copy
input datafile file number=00016 name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_system_dkj7f8go_.dbf
output file name=/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSTEM_FNO-16_0as7a8di tag=TAG20170620T231338 RECID=5 STAMP=947200441
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00018 name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_undotbs1_dkj7f8hg_.dbf
output file name=/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-UNDOTBS1_FNO-18_0bs7a8e1 tag=TAG20170620T231338 RECID=6 STAMP=947200451
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00019 name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_mydata_dmbcp0wz_.dbf
output file name=/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-MYDATA_FNO-19_0cs7a8e4 tag=TAG20170620T231338 RECID=7 STAMP=947200454
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 20-JUN-17

Starting Control File and SPFILE Autobackup at 20-JUN-17
piece handle=/u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/autobackup/2017_06_20/o1_mf_s_947200455_dnms48pp_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 20-JUN-17


Then I switch the database file pointers to the new location.

RMAN> alter pluggable database newpdb close;

Statement processed

RMAN> switch pluggable database newpdb to copy;

datafile 16 switched to datafile copy "/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSTEM_FNO-16_0as7a8di"
datafile 17 switched to datafile copy "/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSAUX_FNO-17_09s7a8d2"
datafile 18 switched to datafile copy "/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-UNDOTBS1_FNO-18_0bs7a8e1"
datafile 19 switched to datafile copy "/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-MYDATA_FNO-19_0cs7a8e4"

RMAN> recover pluggable database newpdb;

Starting recover at 20-JUN-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=42 device type=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 20-JUN-17

RMAN> alter pluggable database newpdb open;

Statement processed


I can now verify the new location for the database files.

SQL> alter session set container=NEWPDB;

Session altered.

SQL> select file_name from dba_data_files;



The datafiles at the old location still remain and can be manually deleted later as they are no longer part of the database.  Note that those datafiles are still registered by RMAN as COPY:

RMAN> list copy of pluggable database newpdb;

using target database control file instead of recovery catalog
List of Datafile Copies

Key File S Completion Time Ckp SCN Ckp Time Sparse
------- ---- - --------------- ---------- --------------- ------
8 16 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_system_dkj7f8go_.dbf
Container ID: 4, PDB Name: NEWPDB

9 17 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_sysaux_dkj7f8hf_.dbf
Container ID: 4, PDB Name: NEWPDB

10 18 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_undotbs1_dkj7f8hg_.dbf
Container ID: 4, PDB Name: NEWPDB

11 19 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_mydata_dmbcp0wz_.dbf
Container ID: 4, PDB Name: NEWPDB


If I manually delete the old location files, I'd need to also delete them from the RMAN Registry.  Alternatively, I can directly delete them from RMAN.

RMAN> delete copy of pluggable database newpdb;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=278 device type=DISK
List of Datafile Copies

Key File S Completion Time Ckp SCN Ckp Time Sparse
------- ---- - --------------- ---------- --------------- ------
8 16 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_system_dkj7f8go_.dbf
Container ID: 4, PDB Name: NEWPDB

9 17 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_sysaux_dkj7f8hf_.dbf
Container ID: 4, PDB Name: NEWPDB

10 18 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_undotbs1_dkj7f8hg_.dbf
Container ID: 4, PDB Name: NEWPDB

11 19 A 20-JUN-17 2188274 20-JUN-17 NO
Name: /u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_mydata_dmbcp0wz_.dbf
Container ID: 4, PDB Name: NEWPDB

Do you really want to delete the above objects (enter YES or NO)? YES
deleted datafile copy
datafile copy file name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_system_dkj7f8go_.dbf RECID=8 STAMP=947200522
deleted datafile copy
datafile copy file name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_sysaux_dkj7f8hf_.dbf RECID=9 STAMP=947200522
deleted datafile copy
datafile copy file name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_undotbs1_dkj7f8hg_.dbf RECID=10 STAMP=947200522
deleted datafile copy
datafile copy file name=/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_mydata_dmbcp0wz_.dbf RECID=11 STAMP=947200522
Deleted 4 objects


I still need to relocate the TEMP Tablespace Tempfile.

SQL> alter session set container=NEWPDB;

Session altered.

SQL> select file_name from dba_temp_files;


SQL> alter tablespace temp add tempfile '/u03/oradata/NEWPDB/temp01.dbf' size 100M;

Tablespace altered.

SQL> alter tablespace temp drop tempfile '/u02/oradata/ORCL12C/4F793A6D323D1344E0530100007FABC7/datafile/o1_mf_temp_dkj7f8hg_.dbf';

Tablespace altered.

SQL> select file_name from dba_temp_files;



So, I used the same Image Copy method we'd use in 11g databases to relocate a 12c Pluggable Database within the same server and storage.

To verify that I can access the PDB in the new location :

SQL> connect hemant/hemant@NEWPDB
SQL> select table_name from user_tables;


SQL> insert into obj_list select * from obj_list;

72641 rows created.

SQL> commit;

Commit complete.



Categories: DBA Blogs

Benefits of vendor support for problem resolution

Chris Warticki - Tue, 2017-06-20 13:49

Elaina Stergiades, Research Manager, Software and Hardware Support Services, IDC

As business leaders push for faster adoption of mobile, social and cloud technologies, CIOs and IT managers wrestle with the best approach to embracing these new delivery systems and “keeping the lights on” for important production systems.  With the relentless pace of change in technology in 2017, IDC research shows that most organizations have already started their digital transformation journey across all aspects of the business.  However, the reality is that for most companies, IT must integrate these new technologies with extensive production systems – and support them for the foreseeable future.  At the same time, CIOs must meet demanding internal and external service level agreements across the technology stack that can directly affect the bottom line – no easy feat in these complex environments.

With so much complex technology supporting revenue generation and customer-facing activities, business leaders rely on the IT organization to make sure that technology is operating at peak efficiency.  It doesn’t take much for technology disruptions to negatively affect the customer experience and the bottom line – and it can be difficult to repair that damage once it occurs.  As a result, CIOs and IT managers are looking for assurance from support providers that can minimize potentially harmful downtime when problems affect IT systems.

IDC believes that support directly from the original software or hardware vendors can be a good choice when navigating these complex technology landscapes.  Many patches for important software problems, and all security updates, can only be delivered by the original vendor, which can be critical for enterprises with strict compliance requirements. In addition, IT organizations typically try to figure out technology problems on their own when problems first occur.  This self-diagnosis and resolution can be much faster and easier than always reaching out for help, and original vendors can include advanced tools and utilities in the code that other providers can't offer.

For CIOs and IT managers considering vendor support to enable accelerated problem resolution, IDC recommends looking for the following support features:

  • Immediate access to the latest updates and patches when problems are resolved, including security updates – a critical consideration with growing security threats

  • An extensive library of tools and information for self-diagnosis and resolution, including online portals, knowledge bases with advanced search capabilities, and a robust peer-to-peer community

  • Advanced remote diagnostic capabilities, including tools with direct ties into the software to isolate and identify complex software issues

  • Direct access to support staff at the original software vendor when problems occur

  • Well-established partner networks in support delivery, with the ability to support complex integrations of technology from multiple hardware and software providers

When considering vendor support offerings and packages, IDC also recommends looking for comprehensive offerings that can assure broad access to these deliverables.  Ideally, the support provider will offer add-on services as needed, both for additional support activities and non-traditional support related activities (like optimization services and upgrade services).   IDC research has shown that IT organizations spend considerable time managing contracts and relationships with vendors and suppliers, and anything that can help minimize that time is helpful.  As IT environments grow more complex integrating on-premises and cloud solutions, support deliverables that can help speed problem resolution will be critical for resource-strapped IT organizations.

Elaina Stergiades is the Research Manager for IDC's Software Support Services program. In this position, she provides insight and analysis of industry trends and market strategies for software vendors supporting applications, development environment and systems software. Elaina is also responsible for research, writing and program development of the software support services market.

Prior to joining IDC, Elaina spent 10 years in the software and web design industries. As a quality assurance engineer at Parametric Technology and Weather Services International (WSI), she led testing efforts for new applications and worked closely with customers to design and implement new functionality. Elaina also worked in product marketing at WSI, directing an initiative to launch a new weather crawl system. More recently, she was a project manager at Catalyst online. At Catalyst, Elaina was responsible for managing client search marketing campaigns targeting increased website traffic, revenue and top search engine rankings.

Elaina has a B.S. in mechanical engineering from Cornell University and an M.B.A. from Babson College.


Subscribe to Oracle FAQ aggregator