Feed aggregator

Scripted Collection of OS Watcher Files on Exadata

Tyler Muth - Fri, 2012-11-02 13:51
I’ve been working a lot with graphing DB and OS metrics in R. I find it especially useful in Exadata POVs (proofs of value) to gather and graph the oswatcher vmstat files for the compute nodes and iostat for the cells. For an example, take a look at this graph (PDF, 168 KB) of what […]
Categories: DBA Blogs, Development

eDVD - From Concept to Implementation

Bradley Brown - Thu, 2012-11-01 21:07
I love the process of starting a new venture.  It's an iterative process: you come up with a concept, take it to the market for feedback, develop a minimum viable product, take that to the market for feedback, and the loop continues until you have customers - or you're dead (i.e. out of time, out of money, etc.).  We're very focused on the former - lots and lots of customers!

InteliVideo started around the concept of delivering educational courses online...allowing people to charge whatever they want for a course and keep 70% of it for themselves (i.e. an education and training monetization marketplace).  After talking to the market (i.e. trainers like myself), I learned that many trainers enjoy traveling the world doing their training.  Unlike me, they aren't so interested in making money while they sleep.  My class is available on the InteliVideo site, and if you know any trainers who want to make money while they sleep, direct them my way!  Everything is self service!

We then changed our focus to those who had existing video content they were selling (or not selling) - i.e. the people and companies selling DVDs.  After a number of discussions, we learned that people who produce videos and want to sell them typically go to a distributor to fulfill their sales needs.  These distributors make their customers (i.e. the producers) pre-pay for about 1,000 DVDs per title on average.  So if they produce 30 titles, at any one time they will have at least 30,000 DVDs in stock.  Those up-front costs are considerable.  These distributors also require that the producer sign a contract giving them (the distributors) exclusivity for distribution.  That makes sense, but there is a small catch in that exclusivity: it typically states that the producer can't sell their video content in ANY media - i.e. even media that hasn't been invented yet.  Specifically, if a producer signed a DVD distribution contract in 2008 and then the iPad was invented, changed the world, and everyone started moving to the digital world, that producer is stuck!  They can't move to the digital world because of their contract.  In other words, they no longer really own their content.  So we quickly learned that this "long tail" market wasn't a good market to focus on.  All of the "self-service" functionality (to upload, categorize, index, search engine optimize, etc. 100s of videos) we built for this market works great, however!

Now...this isn't true for everyone who produced video content, but it's true for a large majority of them (i.e. we'll call them the "long tailers").  It was certainly the industry norm in 2008.  Those who had a bit more leverage actually read the contract and likely disputed that clause.  Or...they simply went to a DVD duplicator (with no handcuffs) and used or built their own sales engine.

As I've said before, good start-ups pivot (and twist and turn) to find their market.  DVD duplicators have a number of customers.  Finding customers who bring you customers is a great thing!  At the same time, most DVD duplicators have not moved into the digital world just yet.  Many that we've met with have seen their sales drop to as little as 10% of what they once were!

Here enters InteliVideo pivot #2.  Sure, we still have a platform that works for educational training purposes AND for those video producers who actually own their content.  We have also built an excellent CRM (customer relationship management) email campaign management system into InteliVideo.  This allows those with video content to email (after all, email still rules the world) a video sampler (or trial subscription) to a prospective customer, to know how much video every prospect watched, and to take action based on this information.

DVD duplicators needed an additional piece of functionality, which we've now added to InteliVideo: what we call the eDVD functionality.  DVD duplicators are used to authoring DVDs for their customers.  They want to make the DVD look exactly the way they want it to look.  We quickly learned that many of them saw this as clear differentiation: they not only duplicate the DVD, they create the look and feel for their customers before doing the duplication.  Today, the (physical) DVD review process alone requires shipping or delivering a physical product.  Not with an eDVD!  They can of course continue to deliver a physical DVD for any or all of their customers.  But with a simple email, their customers can review an eDVD, AND they can now offer an eDVD solution to their customers - transparently through InteliVideo.

For a company like Schooled Film.com that has a physical DVD for their customers, the standard InteliVideo look and feel is something like this:

[screenshot: the standard InteliVideo product page]

You can see that the video is at the top, chapters to the right, pricing below, social media below that, etc.  However, as we learned from the DVD duplicators, each producer may have their own vision of the look and feel for their eDVD.  So we created an eDVD authoring tool (in the browser).  This allows Schooled to have a page that looks exactly as they want it to look.  For example:

[screenshot: Schooled's custom eDVD page]

This is powerful functionality for our customers!  Try it out here!

We're seeing some real traction with the DVD duplicators.  If you know someone in that business, point them our way!  We would love to help them move into 2013 with many differentiators over the iTunes, Amazon, etc. marketplaces - here's a short list of advantages over the iTunes market:

  1. Cross platform - iTunes is for iPhone/iPad only.  InteliVideo, by contrast, is available on iPhone, iPad, Android phones, Android tablets, Google TVs, Yahoo TVs, Roku, your browser, etc.  In other words, it’s truly cross platform.  We also offer “in app purchases” on Android, Roku, iPhones and iPads, but there’s a cost to doing that.
  2. Pricing - We have a LOT of flexibility on pricing.  We can gladly charge the same fee that the iTunes (and Android and Amazon) marketplaces do – which is 30% of gross fees.  But we can also offer a variety of pricing models.  We tell our customers that “if you can dream up a way to charge people, we can implement it.”  In other words, a company like Subway could use our platform for training…and we could charge per employee rather than per “customer.”
  3. White labeled offering - InteliVideo is a white labeled offering – in other words, you can put some HTML code on your website and sell your videos through your site.  It looks like your site, but everything goes through InteliVideo - payment to delivery.  Think of the InteliVideo platform just like you think of  Google Analytics - plug and play.
  4. eDVDs – as mentioned above, we allow you to design your own “skin” for your videos – just like you can author a physical DVD, we allow you to do the same for an eDVD.
  5. Social marketing – we offer a social component to videos – you can post a comment and people can then view a sample of the video before they buy.
  6. Bookmarks and Chapters – we allow you to have any number of videos that you group into an eDVD and you can break up a video into any number of chapters.  Customers can bookmark their own sections in the videos too.
  7. Referral / Affiliate network – your customers can put some HTML code on their websites and earn a referral fee – you set the fee they get (i.e. it could be 5% or it could be 95%).
  8. Email campaigns / CRM – we have a built-in CRM system that allows you to send out any number of emails to people.  When they click on the unique link in the email, we create an InteliVideo account, send them the details for that account and track everything about that person.  We know what they watched, how long, etc.
  9. Virtual classrooms – you can set up a course through InteliVideo and you can monitor any number of students.  You can watch their progress.
  10. SEO – We integrated with Google Analytics, so you can track your clicks, etc. through them, but we also log every click, etc.


We'll certainly keep you posted on our progress as we continue to pivot, twist and turn and we would love to hear your feedback, comments, questions, and thoughts.

Oracle Forms Look and Feel 1.7.6

Francois Degrelle - Thu, 2012-11-01 04:26
This version allows the developer to display video clips inside or outside the Forms application. The video clip can be loaded from the client, an A.S. virtual directory, an Internet URL and, of course, from a database BLOB column. The movie is played...

Event Processed

Antony Reynolds - Wed, 2012-10-31 19:24
Installing Oracle Event Processing 11g

Earlier this month I was involved in organizing the Monument Family History Day.  It was certainly a complex event, with dozens of presenters, guides and 100s of visitors.  So with that experience of a complex event under my belt I decided to refresh my acquaintance with Oracle Event Processing (CEP).

CEP has a developer side based on Eclipse and a runtime environment.

Server install

The server install is very straightforward (documentation).  It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are:

  1. Download required software
    • JRockit – I used Oracle “JRockit 6 - R28.2.5” which includes “JRockit Mission Control 4.1” and “JRockit Real Time 4.1”.
    • Oracle Event Processor – I used “Complex Event Processing Release 11gR1 (11.1.1.6.0)”
  2. Install JRockit
    • Run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
  3. Install CEP
    • Unzip the downloaded file
    • Run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable.
    • Choose a custom install and add the examples if needed.
      • It is not recommended to add the examples to a production environment but they can be helpful in development (a sketch of the install commands follows this list).
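
A hedged sketch of steps 2 and 3 above as shell commands; the installer file names are illustrative and depend on the exact downloads:

    # JRockit: the download is a single executable binary
    chmod +x jrockit-installer.bin        # actual file name varies by download
    ./jrockit-installer.bin
    # CEP: unzip the download, then run the installer it contains
    unzip ocep-11.1.1.6.0.zip             # actual file name varies by download
    chmod +x cep-installer.bin            # actual file name varies by download
    ./cep-installer.bin                   # choose a custom install to add the examples
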
Developer Install

The developer install requires several steps (documentation).  A developer install needs access to the software for the server install, although JRockit isn’t necessary for development use.

  1. Download required software
    • Eclipse  (Linux) – It is recommended to use version 3.6.2 (Helios)
  2. Install Eclipse
    • Unzip the download into the desired directory
  3. Start Eclipse
  4. Add Oracle CEP Repository in Eclipse
    • http://download.oracle.com/technology/software/cep-ide/11/
  5. Install Oracle CEP Tools for Eclipse 3.6
    • You may need to set the proxy if behind a firewall.
  6. Modify eclipse.ini (a sketch of the resulting fragment follows this list)
    • If using Windows, edit with WordPad rather than Notepad
    • Point to 1.6 JVM
      • Insert the following lines before -vmargs
        • -vm
        • \PATH_TO_1.6_JDK\jre\bin\javaw.exe
    • Increase PermGen Memory
      • Insert the following line at the end of the file
        • -XX:MaxPermSize=256M
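
Putting those edits together, a minimal sketch of the affected eclipse.ini fragment; the JDK path is illustrative for a Windows install:

    -vm
    \PATH_TO_1.6_JDK\jre\bin\javaw.exe
    -vmargs
    ...existing -vmargs options...
    -XX:MaxPermSize=256M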

Restart Eclipse and verify that everything is installed as expected.

Voila The Deed Is Done

With CEP installed you are now ready to start a server.  If you didn’t install the demos, you will need to create a domain before starting the server.

Once the server is up and running (using startwlevs.sh) you can verify that the Visualizer is available at http://hostname:port/wlevs; the default port for the demo domain is 9002.
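
A hedged sketch of starting the demo server; the domain directory path is illustrative and depends on where the examples were installed:

    # start the server from its domain's server directory (path illustrative)
    cd $CEP_HOME/ocep_11.1/samples/domains/helloworld_domain/defaultserver
    ./startwlevs.sh
    # once the server is up, browse to the Visualizer:
    # http://localhost:9002/wlevs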

With the server running you can test the IDE by creating a new “Oracle CEP Application Project” and creating a new target environment pointing at your CEP installation.

Much easier than organizing a Family History Day!

Easy way to access JPA with REST (JSON / XML)

Edwin Biemond - Tue, 2012-10-30 15:55
With the release of EclipseLink 2.4, JPA persistence units can be accessed using REST with JSON or XML formatted messages. The 2.4 version supports JPA-RS, which is a RESTful API for dealing with JPA. In this blog post I will show you what is possible with JPA-RS, how easy it is, and how to set up your own EclipseLink REST service. This is also possible when you want to expose database tables as SOAP
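
To give a flavor of the API, here is a hedged sketch of fetching an entity over JPA-RS. The host, context root, persistence unit ("hr") and entity (Employee/100) are all hypothetical, and the path follows the persistence/{version}/{unit}/entity/{type}/{id} pattern from the JPA-RS documentation (exact segments may differ by release):

    # ask for JSON; swap the Accept header to application/xml for XML
    curl -H "Accept: application/json" \
         "http://localhost:7001/myapp/persistence/v1.0/hr/entity/Employee/100"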

YCSB Benchmark Results for Other NoSQL DBs

Charles Lamb - Tue, 2012-10-30 15:17

Here's an interesting article on YCSB benchmarks run on Cassandra, HBase, MongoDB, and Riak.  Compare these to the Oracle NoSQL Database YCSB performance test results.

  • Oracle NoSQL Database Performance Tests
  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec



Movember 2012: The ‘stache returns!

Dan Norris - Tue, 2012-10-30 08:16

In 2011, I joined many others in the Movember event for the first time. This is a fund-raising effort where participants grow a mustache for the month of November and collect donations to support men’s health, specifically prostate and testicular cancers. Individuals can participate on their own or as a team, but no matter what you donate, it all goes to the same place. In my first year, I managed to collect $754 from 15 donors! Hopefully I’ll exceed my previous year’s fundraising this year…just not sure yet which mustache style will bring in the most money!

To see photo updates of how my ‘stache is coming along and to make donations, go to my page on Movember. Thanks for any donation you can make!

TROUG: Oracle Day 2012 İstanbul

H.Tonguç Yılmaz - Sun, 2012-10-28 14:08
Hüsnü has written about his own presentation. Attendance at the Oracle Day 2012 event taking place in İstanbul on November 15 is free; if you are going, don’t miss the TROUG presentations, starting with Hüsnü’s. Note: Turkcell Group CIO İlker Kuruöz also appears to be giving a presentation titled “Dönüşümsel Bulut Yolculuğu” (“Transformational Cloud Journey”) at 10:30 the same day; you can reach the full agenda via this link.

Using JSON-REST in ADF Mobile

Edwin Biemond - Sun, 2012-10-28 06:39
In the current version of ADF Mobile, the ADF DataControls (URL and WS) support only SOAP and JSON-XML. But this does not mean we cannot use JSON. To handle JSON we can use the RestServiceAdapter and JSONBeanSerializationHelper classes. The RestServiceAdapter will handle the REST service, and JSONBeanSerializationHelper helps us convert JSON to Java. I made a little ADF Mobile demo based on

Oracle R Enterprise Configuration on Oracle Linux

Husnu Sensoy - Thu, 2012-10-25 09:37

Before starting to deal with large-volume data problems on Oracle R Enterprise (ORE), you need to perform a couple of configuration steps on your Oracle Linux and Oracle Database systems. Here is the recipe:

  • Ensure that you have the following lines in the oracle user's .bash_profile file
    export R_HOME=/usr/lib64/R
    export PATH=/usr/bin:$PATH
    
  • Ensure that you have already installed the libpng.x86_64 and libpng-devel.x86_64 packages on your Oracle Linux; otherwise issue the following to install them.
    yum install libpng.x86_64 libpng-devel.x86_64
    
  • Switch to root and issue R. Once you are in the R session, install two prerequisites of ORE:
    install.packages("DBI")
    install.packages("png")
    
  • Ensure that your database is 11.2.0.3; otherwise you need to apply several database patches.
  • Go to the Oracle R Enterprise Download Page and, under the Oracle R Enterprise Downloads (v1.1) section, download Oracle R Enterprise Server Install for Oracle Database on Linux 64-bit (91M) and Oracle R Enterprise Client Supporting Packages for Linux 64-bit Platform (1M) (ore-server-linux-x86-64-1.1.zip and ore-supporting-linux-x86-64-1.1.zip)
  • Unzip the files by issuing (note that unzip takes one archive per invocation)
    unzip ore-server-linux-x86-64-1.1.zip
    unzip ore-supporting-linux-x86-64-1.1.zip
    
  • At this point, ensure that the database that will support Oracle R Enterprise is up and running
  • Execute install.sh to create the ORE libraries and database objects in the SYS and RQSYS schemas.
    cd server
    ./install.sh
    


    Oracle R Enterprise 1.1 Server Installation.

    Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.

    Do you wish to proceed? [yes]

    Checking R ................... Pass
    Checking R libraries ......... Pass
    Checking ORACLE_HOME ......... Pass
    Checking ORACLE_SID .......... Pass
    Checking sqlplus ............. Pass
    Checking ORE ................. Pass

    Choosing RQSYS tablespaces
    PERMANENT tablespace to use for RQSYS [SYSAUX]:
    TEMPORARY tablespace to use for RQSYS [TEMP]:

    Current configuration
    R_HOME = /usr/lib64/R
    R_LIBS_USER = /u01/app/oracle/product/11.2.0/dbhome_1/R/library
    ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
    ORACLE_SID = orcl
    PERMANENT tablespace = SYSAUX
    TEMPORARY tablespace = TEMP

    Installing libraries ......... Pass
    Installing RQSYS ............. Pass
    Installing ORE packages ...... Pass
    Creating ORE script .......... Pass

    NOTE: To use ORE functionality, a database user with RQROLE role,
    a few more grants and synonyms is required. A complete list of
    requirements is available in rquser.sql. There is also a demo
    script demo_user.sh creating a new user RQUSER.

    To use embedded R functionality, an RQADMIN role is required.
    Please, consult the documentation for more information on various
    roles.

    Done

  • Next, install the required R libraries/packages using the install.packages command in R. Ensure that the user who will start R (root will do) has write permission on /usr/lib64/R/library
    install.packages("/home/oracle/Desktop/server/ORE_1.1_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    install.packages("/home/oracle/Desktop/server/OREbase_1.1_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    install.packages("/home/oracle/Desktop/server/OREeda_1.1_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    install.packages("/home/oracle/Desktop/server/OREgraphics_1.1_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    install.packages("/home/oracle/Desktop/server/OREstats_1.1_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    install.packages("/home/oracle/Desktop/server/ORExml_1.1_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    install.packages("/home/oracle/Desktop/supporting/ROracle_1.1-2_R_x86_64-unknown-linux-gnu.tar.gz", repos = NULL)
    
  • Finally, start an R session (ensure that $ORACLE_HOME/lib is in your LD_LIBRARY_PATH before starting the session; see the sketch after the output below) and load the ORE library
    library(ORE)
    Loading required package: OREbase
    Loading required package: ROracle
    Loading required package: DBI
    
    Attaching package: 'OREbase'
    
    The following object(s) are masked from 'package:base':
    
        cbind, data.frame, eval, interaction, order, paste, pmax, pmin,
        rbind, table
    
    Loading required package: OREstats
    Loading required package: MASS
    Loading required package: OREgraphics
    Loading required package: OREeda
    Loading required package: ORExml
    
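A hedged sketch of the environment setup mentioned in the last step, run before starting the R session; the ORACLE_HOME value is taken from the install output above:

    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    R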

My perspective on the Teradata Aster Big Analytics Appliance

Donal Daly - Wed, 2012-10-24 17:50
Aster Big Analytics Appliance 3H

By now, no doubt, you have heard the announcement of our new Teradata Aster Big Analytics Appliance. In the interests of full disclosure, I work for Teradata within the Aster CoE in Europe. Prior to joining Teradata, I was responsible for a large, complex Aster environment built on commodity servers, holding in excess of 30 TB of usable data, with a 24x7 operational environment. So my perspective in this post is from that standpoint, and also from recalling the time when we went through a hardware refresh and software upgrade.

OK. First of all, you procure 30 servers and at least two network switches (for redundancy). When you receive them, it is up to your data centre team to rack them and cable them up. Next, you check that the firmware on each system is the same; surprise, surprise, it isn't, so a round of upgrades later you configure the RAID controllers. In this case we went for RAID 0, which maximises space - more on that choice later...

Then it is over to the network team to configure the switches and the VLAN we are going to use. We then put a basic Linux image on the servers so we can carry out some burn-in tests, to make sure all the servers have a similar performance profile. Useful tests, as we found two servers whose RAID controllers were not configured correctly. It was the result of human error; I guess manually doing 30 servers can get boring. This burns through a week before we can install Aster, configure the cluster and bring all the nodes online to start the data migration process. Agreed, this is a one-off cost, but in this environment we are responsible for all hardware, network and Linux issues, with the vendor supporting just Aster. Many customers never count that cost, or the outages that these one-off configurations can cause.

We had some basic system management, as these are commodity servers, but nothing as sophisticated as Teradata Server Management and the Teradata Vital Infrastructure. I like that the Server Management software allows me to manage 3 clusters within the rack logically (e.g. test, production, backup). I also like the proactive monitoring, as it is likely they will identify issues before they become an outage for us, and an issue found with one customer can be checked against all customers. If you build and manage your own environment, you don't get that benefit.

Your next consideration when looking at an appliance should be whether it is leading edge and how much thought has gone into the configuration. The Appliance 3H is a major step forward from Appliance 2. From a CPU perspective, it has the very latest processors from Intel: dual 8-core Sandy Bridge @ 2.6GHz. Memory has increased to 256GB. Connectivity between nodes is now provided by Infiniband at 40Gb/s. Disk drives are the newer 2.5" size, enabling more capacity per node: worker nodes use 900GB drives, while the backup and Hadoop nodes leverage larger 3TB drives; RAID 5 for Aster and RAID 6 for the backup and Hadoop nodes. The larger cabinet size also enables better data centre utilisation through higher density.

I also like the idea of providing integrated backup nodes. Previously Aster had only the parallel backup software; you had to procure your own hardware and manage it. We also know that all of these components have been tested together, so I benefit from Teradata's extensive testing rather than building and testing reference configurations myself.

What this tells me is that Teradata can bring hardware advances to the marketplace quickly. Infiniband will make an important difference. For example, joins against very large dimension tables that I have decided against replicating will run much faster. I also expect a positive impact on backups. In my previous environment, a full backup of 30 TB or so took us about 8 hours. Certainly the parallel nature of their backup software could soak up all the bandwidth on a 10Gb connection, so we had to throttle it back.  On the RAID choices, I absolutely concur with RAID 5. If I was building my own 30-node cluster again I wouldn't have it any other way. While the replication capabilities in Aster protect me against at least any single node failure, a disk failure will bring that node out of the cluster until the disk is replaced and the node is rebuilt and brought back online. When you have 30+ servers each with 8 drives (240+ disk drives), the most common failure will be a disk drive. With RAID 5 you can replace the drive without any impact on the cluster at all, and you still have the replication capabilities to protect yourself from multiple failures.

I also like the option of having a Hadoop cluster tightly integrated as part of my configuration. For example, if I have to store a lot of 'grey data' (e.g. log/audit files for compliance reasons), I can leverage a lower cost of storage and still do batch transformations and analysis as required, bringing a working set of data (last year, for example) into Aster for deeper analytics. With the transparent capabilities of SQL-H, I can extend those analytics into my Hadoop environment as required.

Of course purchasing an appliance is a more expensive route than procuring, building and configuring it all yourself. However, most enterprises are not hobbyists; building this sort of infrastructure is not their core competence, nor does it bring value to their business. They should be focused on time to value, and with the Teradata Aster Big Analytics Appliance the time to value will be quick, as everything is prebuilt, configured, tested and ready to accept data and start performing analytics on it.  As I talk to customers across Europe, this message is being well received when you talk through the details.

I'll leave you with this thought: one aspect of big data that I don't hear enough about is Value. To me, the dimensions of Volume, Variety, Velocity and Complexity are not very interesting if you are not providing value by means of actionable insights. I believe every enterprise customer needs a discovery platform capable of executing the analytics that can provide an important competitive edge over the competition. This platform should handle structured as well as multi-structured data. It should provide a choice of analytics, whether SQL based, MapReduce or statistical functions, along with a host of prebuilt functions to enable rapid progress. It should appeal to everyone from power users in the business - with a SQL interface that works with their existing visualisation tools - to the most sophisticated data scientists, giving them a rich environment to develop their own custom functions as necessary while benefiting from the power of both SQL and MapReduce to build out these new capabilities. In summary, that is why I am so excited to be talking with customers and prospects about the Teradata Aster Big Analytics Appliance.






Passoker Online Betting Use of Oracle NoSQL Database

Charles Lamb - Fri, 2012-10-19 09:00

Here's an Oracle NoSQL Database customer success story for Passoker, an online betting house.

http://www.oracle.com/us/corporate/customers/customersearch/passoker-1-nosql-ss-1863507.html

There are a lot of great points made in the Solutions section, but as a developer the one I like the most is this one:

  • Eliminated daily maintenance related to single-node points-of-failure by moving to Oracle NoSQL Database, which is designed to be resilient and hands-off, thus minimizing IT support costs

Blueprints API for Oracle NoSQL Database

Charles Lamb - Fri, 2012-10-19 08:19

Here's an implementation of the Blueprints API for Oracle NoSQL Database.

https://github.com/dwmclary/blueprints-oracle-nosqldb

Blueprints is a collection of interfaces, implementations, ouplementations, and test suites for the property graph data model. Blueprints is analogous to the JDBC, but for graph databases. As such, it provides a common set of interfaces to allow developers to plug-and-play their graph database backend. Moreover, software written atop Blueprints works over all Blueprints-enabled graph databases. Within the TinkerPop software stack, Blueprints serves as the foundational technology for:

  • Pipes: A lazy, data flow framework
  • Gremlin: A graph traversal language
  • Frames: An object-to-graph mapper
  • Furnace: A graph algorithms package
  • Rexster: A graph server

Oracle Forms Web: How to display charts

Francois Degrelle - Fri, 2012-10-19 03:17
I see, more and more often, questions from people who are (finally) migrating from client/server to web Forms versions and don't know how to display charts (a feature that is no longer available in the web version). Of course, the first point is to use the Oracle official...

Are you the dumb money in the Cloud?

William Vambenepe - Fri, 2012-10-19 00:55

Another paper came out measuring the performance diversity of Cloud VMs within the same advertised type. Here’s the paper (PDF), here are the slides (PDF) and here is the video (this was presented back in June but I hadn’t seen it until today’s post in the High Scalability blog). Once again, the research shows a large discrepancy. The authors assert that “by selecting better-performing instances to complete the same task, end-users of Amazon EC2 platform can achieve up to 30% cost saving”.

I’ve heard people describing how they use “instance tasting”. Every time they get a new instance, they run performance tests (either on CPU like this paper, or I/O, or both, depending on how they plan to use the VM). They quickly terminate the instances that don’t perform well, to improve their bang/buck ratio. I don’t know how prevalent that practice is, but clearly the more sophisticated users have ways to game the system - er, optimize their consumption.
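
For concreteness, a hedged sketch of what such instance tasting might look like; the benchmark, threshold, and tooling are all illustrative (sysbench for the CPU test, the EC2 API tools for termination, and $INSTANCE_ID assumed to be set):

    # benchmark a freshly launched instance; lower total time is better
    score=$(sysbench --test=cpu --cpu-max-prime=20000 run \
            | awk '/total time:/ {gsub(/s/,"",$3); print $3}')
    # terminate it if it misses an empirically chosen threshold (illustrative)
    if (( $(echo "$score > 25.0" | bc -l) )); then
        ec2-terminate-instances "$INSTANCE_ID"   # under-performer: send it back
    fi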

But this doesn’t happen in a vacuum. It necessarily increases the likelihood that less sophisticated users will end up with (comparatively over-priced) lower-performing instances. Especially in Clouds with high heterogeneity and which have little unused capacity. It’s like coming to the fruit salad at a buffet and finding all the berries gone. I hope you like watermelon and honeydew.

Wall Street has a term for this. For people who don’t understand the system in details, who don’t have access to insider information and don’t use sophisticated trading technology: the dumb money.

Categories: Other

Exadata Parameter _AUTO_MANAGE_EXADATA_DISKS

Alejandro Vargas - Wed, 2012-10-17 11:50
Exadata auto disk management is controlled by the parameter _AUTO_MANAGE_EXADATA_DISKS.

The default value for this parameter is TRUE.

When _AUTO_MANAGE_EXADATA_DISKS is enabled, Exadata automates the following disk operations:

If a griddisk becomes unavailable/available, ASM will OFFLINE/ONLINE it.
If a physicaldisk fails or its status changes to predictive failure, then for all griddisks built on it ASM will DROP FORCE the failed ones and DROP the ones with predictive failures.
If a flashdisk's performance degrades and there are griddisks built on it, they will be DROP FORCEd in ASM.
If a physicaldisk is replaced, the celldisk and griddisks will be recreated, and the griddisks will be automatically ADDED in ASM if they were automatically dropped by ASM. If you manually drop the disks, that will not happen.
If a NORMAL, ONLINE griddisk is manually dropped, the FORCE option should not be used; otherwise the disk will be automatically added back in ASM.
If a griddisk is inactivated, ASM will automatically OFFLINE it.
If a griddisk is activated, ASM will automatically ONLINE it.

There are some error conditions that may require temporarily disabling _AUTO_MANAGE_EXADATA_DISKS.

Details on MOS 1408865.1 - Exadata Auto Disk Management Add disk failing and ASM Rebalance interrupted with error ORA-15074.

Immediately after taking care of the problem, _AUTO_MANAGE_EXADATA_DISKS should be set back to its default value of TRUE, as sketched below.
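
A hedged sketch of that disable/re-enable cycle on the ASM instance, using the usual double-quote syntax for underscore parameters; review the MOS notes referenced here before touching hidden parameters:

    # temporarily disable Exadata auto disk management (ASM instance)
    echo 'alter system set "_AUTO_MANAGE_EXADATA_DISKS"=FALSE;' | sqlplus -S / as sysasm
    # ...perform the corrective action from MOS 1408865.1...
    # then restore the default
    echo 'alter system set "_AUTO_MANAGE_EXADATA_DISKS"=TRUE;'  | sqlplus -S / as sysasm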

Full details are in Auto disk management feature in Exadata (Doc ID 1484274.1)
Categories: DBA Blogs

Installation & Configuration: Tips & Tricks for installing Weblogic and Oracle Forms 11g slides

Francois Degrelle - Wed, 2012-10-17 01:22
Here is a very interesting blog entry from Mia Urman. It is a treasure for those who (don't want to) fight with those installation steps. See particularly the "Installation & Configuration: Tips & Tricks for installing Weblogic and Oracle Forms 11g"...
