
Feed aggregator

In-Memory Column Store: 10046 May Be Lying to You!

Pythian Group - 4 hours 22 min ago

The Oracle In-Memory Column Store (IMC) is a new database option available to Oracle Database Enterprise Edition (EE) customers. It introduces a new memory area housed in your SGA, which makes use of the compression functionality introduced with the Oracle Exadata platform, and stores data in a column-oriented format rather than the traditional row-oriented one. Note: you don’t need to be running on Exadata to be able to use the IMC!


Part I – How does it work?

In this part we’ll take a peek under the hood of the IMC and check out some of its internal mechanics.

Let’s create a sample table which we will use for our demonstration:

create table test inmemory priority high
as
select a.object_name as name, rownum as rn,
sysdate + rownum / 10000 as dt
from all_objects a, (select rownum from dual connect by level <= 500);

Almost immediately upon creating this table, the w00? processes will wake up from sleeping on the event ‘Space Manager: slave idle wait’ and start their analysis to check out the new table. By the way, the sleep times for this event are between 3 and 5 seconds, so it’s normal if you experience a little bit of a delay.
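If you want to watch this happening, something along these lines shows the slaves and their waits (a sketch; the exact program-name format varies by platform and version):

```sql
-- Watch the in-memory worker slaves and what they are waiting on
select sid, program, event, state, seconds_in_wait
from   v$session
where  program like '%(W0%'
   or  event = 'Space Manager: slave idle wait';
```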

The process that picked it up will then create a new entry in the new dictionary table compression$, such as this one:

SQL> exec pt('select ts#,file#,block#,obj#,dataobj#,ulevel,sublevel,ilevel,flags,bestsortcol, tinsize,ctinsize,toutsize,cmpsize,uncmpsize,mtime,spare1,spare2,spare3,spare4 from compression$');
TS# : 4
FILE# : 4
BLOCK# : 130
OBJ# : 20445
DATAOBJ# : 20445
ILEVEL : 1582497813
TINSIZE : 16339840
TOUTSIZE : 9972219
MTIME : 13-may-2014 23:14:46
SPARE1 : 31
SPARE2 : 5256
SPARE3 : 571822

Plus, there is also a BLOB column in compression$, which holds the analyzer’s findings:

SQL> select analyzer from compression$;

004B445A306AD5025A0000005A6B8E0200000300000000000001020000002A0000003A0000004A(output truncated for readability)

A quick check reveals that this is indeed our object:

SQL> exec pt('select object_name, object_type, owner from dba_objects where data_object_id = 20445');

PL/SQL procedure successfully completed.

And we can see the object is now stored in the IMC by looking at v$im_segments:

SQL> exec pt('select * from v$im_segments');
INMEMORY_SIZE : 102301696
BYTES : 184549376
CON_ID : 0

PL/SQL procedure successfully completed.

Thus, we are getting the expected performance benefit of it being in the IMC:

SQL> alter session set inmemory_query=disable;

Session altered.

Elapsed: 00:00:00.01
SQL> select count(*) from test;


Elapsed: 00:00:03.96
SQL> alter session set inmemory_query=enable;

Session altered.

Elapsed: 00:00:00.01
SQL> select count(*) from test;


Elapsed: 00:00:00.13

So far, so good.

Part II – Execution Plans

There are some things we need to be aware of, though, when using the IMC. One of them is that we can’t always trust the execution plans anymore.

Let’s go back to our original sample table and recreate it using the default setting of INMEMORY PRIORITY NONE.

drop table test purge

create table test inmemory priority none
as
select a.object_name as name, rownum as rn,
sysdate + rownum / 10000 as dt
from all_objects a, (select rownum from dual connect by level <= 500);

Now let’s see what plan we’d get if we were to query it right now:

SQL> explain plan for select name from test where name = 'ALL_USERS';


SQL> @?/rdbms/admin/utlxpls

Plan hash value: 1357081020

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 614 | 12280 | 811 (73)| 00:00:01 |
|* 1 | TABLE ACCESS INMEMORY FULL| TEST | 614 | 12280 | 811 (73)| 00:00:01 |

Predicate Information (identified by operation id):

1 - inmemory("NAME"='ALL_USERS')

14 rows selected.

Okay, you might say now that EXPLAIN PLAN is only a guess. It’s not the real plan, and the real plan has to be different. And you would be right. Usually.

Watching the slave processes, there is no activity related to this table. Since its PRIORITY is NONE, it won’t be loaded into the IMC until it’s actually queried for the first or second time.

So let’s take a closer look then, shall we?

SQL> alter session set tracefile_identifier='REAL_PLAN';

Session altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> select name from test where name = 'ALL_USERS';

Now let’s take a look at the STAT line on that tracefile. Note: I closed the above session to make sure that we’ll get the full trace data.

PARSING IN CURSOR #140505885438688 len=46 dep=0 uid=64 oct=3 lid=64 tim=32852930021 hv=3233947880 ad='b4d04b00' sqlid='5sybd9b0c4878'
select name from test where name = 'ALL_USERS'
PARSE #140505885438688:c=6000,e=10014,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=1,plh=1357081020,tim=32852930020
EXEC #140505885438688:c=0,e=58,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=32852930241
WAIT #140505885438688: nam='SQL*Net message to client' ela= 25 driver id=1650815232 #bytes=1 p3=0 obj#=20466 tim=32852930899
WAIT #140505885438688: nam='direct path read' ela= 13646 file number=4 first dba=21507 block cnt=13 obj#=20466 tim=32852950242
WAIT #140505885438688: nam='direct path read' ela= 2246 file number=4 first dba=21537 block cnt=15 obj#=20466 tim=32852953528
WAIT #140505885438688: nam='direct path read' ela= 1301 file number=4 first dba=21569 block cnt=15 obj#=20466 tim=32852955406

FETCH #140505885438688:c=182000,e=3365871,p=17603,cr=17645,cu=0,mis=0,r=9,dep=0,og=1,plh=1357081020,tim=32857244740
STAT #140505885438688 id=1 cnt=1000 pid=0 pos=1 obj=20466 op='TABLE ACCESS INMEMORY FULL TEST (cr=22075 pr=22005 pw=0 time=865950 us cost=811 size=12280 card=614)'

So that’s the wrong plan right there: the STAT line clearly shows that we actually did 22005 physical reads, and therefore likely no in-memory scan but a full scan from disk. There’s clearly a bug here, and the reported execution plan is plain wrong.

Thus, be careful about using INMEMORY PRIORITY NONE, as you may not get what you expect. Since PRIORITY NONE can be overridden by any other PRIORITY setting, your data may get flushed out of the IMC even though your execution plans say otherwise. And I’m sure many of you know it’s often not slow query response times that make the phone ring hot; it’s inconsistent response times. This feature, if used inappropriately, will pretty much guarantee inconsistent response times.

So what we should be doing is sizing the In-Memory Column Store appropriately to hold the objects we actually need in there, and making sure they stay there by setting a PRIORITY of LOW or higher. Use CRITICAL and HIGH to ensure the most vital objects of the application are populated first.
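In SQL terms, that boils down to something like the following (a sketch; the table names are made up, and changing inmemory_size requires an instance restart):

```sql
-- Size the store explicitly rather than relying on leftovers
alter system set inmemory_size = 2G scope=spfile;

-- Populate the vital segments eagerly, in priority order
alter table orders      inmemory priority critical;
alter table order_items inmemory priority high;

-- Still populated without being queried first, but behind the others
alter table audit_log   inmemory priority low;
```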

There was one other oddity that I noticed while tracing the W00? processes.

Part III – What are you scanning, Oracle?

The m000 process’s trace file reveals many back-to-back executions of this select:

PARSING IN CURSOR #140670951860040 len=104 dep=1 uid=0 oct=3 lid=0 tim=23665542991 hv=2910336760 ad='fbd06928' sqlid='24uqc4aqrhdrs'
select /*+ result_cache */ analyzer from compression$ where obj#=:1 and ulevel=:2

They all supply the same obj# bind value, which is our table’s object number. The ulevel values used vary between executions.

However, looking at the related WAIT lines for this cursor, we see:

WAIT #140670951860040: nam='direct path read' ela= 53427 file number=4 first dba=18432 block cnt=128 obj#=20445 tim=23666569746
WAIT #140670951860040: nam='direct path read' ela= 38073 file number=4 first dba=18564 block cnt=124 obj#=20445 tim=23666612210
WAIT #140670951860040: nam='direct path read' ela= 38961 file number=4 first dba=18816 block cnt=128 obj#=20445 tim=23666665534
WAIT #140670951860040: nam='direct path read' ela= 39708 file number=4 first dba=19072 block cnt=128 obj#=20445 tim=23666706469
WAIT #140670951860040: nam='direct path read' ela= 40242 file number=4 first dba=19328 block cnt=128 obj#=20445 tim=23666749431
WAIT #140670951860040: nam='direct path read' ela= 39147 file number=4 first dba=19588 block cnt=124 obj#=20445 tim=23666804243
WAIT #140670951860040: nam='direct path read' ela= 33654 file number=4 first dba=19840 block cnt=128 obj#=20445 tim=23666839836
WAIT #140670951860040: nam='direct path read' ela= 38908 file number=4 first dba=20096 block cnt=128 obj#=20445 tim=23666881932
WAIT #140670951860040: nam='direct path read' ela= 40605 file number=4 first dba=20352 block cnt=128 obj#=20445 tim=23666924029
WAIT #140670951860040: nam='direct path read' ela= 32089 file number=4 first dba=20612 block cnt=124 obj#=20445 tim=23666962858
WAIT #140670951860040: nam='direct path read' ela= 36223 file number=4 first dba=20864 block cnt=128 obj#=20445 tim=23667001900
WAIT #140670951860040: nam='direct path read' ela= 39733 file number=4 first dba=21120 block cnt=128 obj#=20445 tim=23667043146
WAIT #140670951860040: nam='direct path read' ela= 17607 file number=4 first dba=21376 block cnt=128 obj#=20445 tim=23667062232

… and several more.

Now, compression$ contains only a single row. Its total extent size is negligible as well:

SQL> select sum(bytes)/1024/1024 from dba_extents where segment_name = 'COMPRESSION$';


So how come Oracle is reading so many blocks? Note that each of the above waits is a multi-block read of 128 blocks.

Let’s take a look at what Oracle is actually reading there:

SQL> exec pt('select segment_name, segment_type, owner
from dba_extents where file_id = 4
and 18432 between block_id and block_id + blocks - 1');


PL/SQL procedure successfully completed.

There’s our table again. Wait. What?

There must be some magic going on under the covers here. In my understanding, a plain select against table A should not be scanning table B.

If I manually run the same select statement against compression$, I get totally normal trace output.

This reminds me of the good old:

SQL> select piece from IDL_SB4$;
ORA-00932: inconsistent datatypes: expected CHAR got B4

But I digress.

It could simply be a bug that results in these direct path reads being attributed to the wrong cursor. Or it could be intentional: it is, after all, this process’s job to analyze and load the table, so attributing the reads to this cursor may be how the resource usage is instrumented and tracked.

Either way, to sum things up we can say that:

- Performance benefits can potentially be huge
- Oracle automatically scans and caches segments marked as INMEMORY PRIORITY LOW|MEDIUM|HIGH|CRITICAL (they don’t need to be queried first!)
- Oracle scans segments marked as INMEMORY PRIORITY NONE (the default) only after they’re accessed the second time – and they may get overridden by higher priorities
- Oracle analyzes the table and stores the results in compression$
- Based on that analysis, Oracle may decide to load only certain columns into the IMC, or the entire table, depending on available space and on the INMEMORY clause used
- It’s the W00? processes using some magic to do this analysis and read the segment into IMC.
- This analysis is also likely to be triggered again, whenever space management of the IMC triggers again, but I haven’t investigated that yet.
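A quick way to sanity-check all of the above is to ask the dictionary what actually made it into the store, rather than trusting the plan output:

```sql
-- Population status per segment; BYTES_NOT_POPULATED > 0 means an
-- "in-memory" scan will still have to hit disk for part of the segment
select owner, segment_name, populate_status,
       bytes, inmemory_size, bytes_not_populated
from   v$im_segments;
```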

Categories: DBA Blogs

Using Eclipse (OEPE) to Develop Applications using WebSocket and JSON Processing API with WebLogic Server 12.1.3

Steve Button - 10 hours 56 min ago
Following from my last posting, I thought I'd also show how Eclipse (OEPE) makes the new Java EE 7 APIs available from Oracle WebLogic Server 12.1.3.

The first step was downloading and installing the Oracle Enterprise Pack for Eclipse (OEPE) distribution from OTN.

Firing up Eclipse, the next step is to add a new Server type for Oracle WebLogic Server 12.1.3, pointing at a local installation.

With that done, I then created a new Dynamic Web Project that was directed to work against the new WebLogic Server type I'd created.  Looking at the various properties for the project, you can see that the WebSocket 1.0 and JSON Programming 1.0 libraries are automatically picked up and added to the Java Build Path of the application, by virtue of being referenced as part of the WebLogic System Library.

Into this project, I then copied over the Java source and HTML page from my existing Maven project, which compiled and built successfully.

For new applications using these APIs, Eclipse will detect the use of the javax.websocket API and annotations, the javax.json API calls and so forth, and present you with a dialog asking if you want to import the package into the class to resolve the issue.

 With the application now ready, selecting the Run As > Run on Server menu option launches WebLogic Server, deploys the application and opens an embedded browser instance to access the welcome page of the application.

And there's the test application built in Eclipse using the WebSocket and JSON Processing APIs running against WebLogic Server 12.1.3.

Enkitec Extreme Exadata Expo 2014

Doug Burns - 11 hours 56 min ago

(Otherwise known as #E42014 to the Twitterati. Note to the casual reader ... like a lot of conference posts, this is more personal diary entry than having any tech content whatsoever. You have been warned.)

Yes, so this post is hopelessly late, but I really have been busy this time!

Although I feel like I've done quite a few presentations over the past year, a lot of them have been at client sites and the conferences have been a little more spaced out, so it felt good to be back in the wild a couple of months ago, particularly as it was going to be my last conference before moving to Singapore for work (more on that later). It was supposed to be a packed week, with E4 covering the first few days and OUGF the last few days. Happily for me, @HeliFromFinland was good enough to understand that one conference would steal a lot less Singapore preparation time, and the refundable travel made it attractive too. Thanks, Heli! (Although when reading the #OUGF14 tweets in between packing, I wasn't so sure!)

I was lucky enough to be able to use some frequent flyer miles to upgrade both my flights to DFW, which was nice. But with no slides to work on and with an unusually low desire to watch movies, I mainly ate, read, slept and drank and the flight seemed to zip by. I believe I might have had one cocktail too many so that, by the time Jason Osborne picked me up in his extremely nice sports car, I could barely form a sentence. (I may be exaggerating slightly) I was nursing a Mojito slowly by the time everyone else was ready to meet at the hotel bar. It turned out to be the first of quite a few (no idea how that happened!) and although I was in bed relatively early, I felt like death when I woke up. I think that's the first time that's ever happened.

Sunday was all about registering and getting a nice Enkitec speaker goodie bag before settling in for an afternoon of Tanel Poder presenting on Exadata Internals and Advanced Performance Metrics - two 90 minute sessions - and although he was splendid as usual, I ended up only managing the first couple of hours before I really had to *get some food* and *sleep*. I got the first half done, but the second consisted largely of me lying in a hotel room in a zombified state, trying to rouse myself for the speakers' dinner that Enkitec had laid on.



By the time they were done, though, I was able to catch up with them as they returned for a far more sedate last few beers and an early night in bed.

One of the aspects I like most about E4 is that it's available as a webinar for virtual attendees at a reasonable cost and, because it's recorded, I can watch presentations that I might have missed the first time later, when I have some down-time. It means it's much easier to attend for those who can't get travel permission and also that, if you are particularly interested in presentations I just touch on here, anyone can register and see them all for themselves! (and no, I'm not on commission, I just think some of this stuff should have a wider audience than it already has ...)

One of the presentations that you're probably not likely to see at too many other conferences (I'm certainly not familiar with it) was the initial keynote - Exadata: The Untold Story of a Startup within Oracle with Kodi Umamageswaran who is VP Exadata Product Development at Oracle. I know that lining up this particular speaker took a lot of work from the organisers and it was well worth it for some classic keynote stuff. An entertaining and wide-ranging look at the pre-history of Exadata (SAGE) development, some of the key stages since and a look at the most recent developments. I thought it hit just the right level of being technical enough to be entertaining for a tech crowd, but without getting too bogged down in any one area.

Next up was Tom Kyte's keynote, titled What Needs to Change, during which he talked about some of the many changes in performance expectations and system capabilities in the time he's been working with Oracle and a good sprinkling of the content from the Real World Performance Days he does with Andrew Holdsworth and Graham Wood. My favourite bit of this is always when he talks about how 100% CPU usage is a bad idea for an OLTP-type system (essentially a very bad idea for response times) because I've had so many people at customer sites quote me some Tom Kyte thread or another suggesting that 100% CPU usage is a great thing, because you've paid for that capacity after all. Erm, yes, sure it is for some workloads but possibly a more important Tom Kyte quote is ... 'it depends'!

I wanted to attend Exadata Resource Manager Deep Dive by Akshay Shah of Oracle, but felt I needed to skip at least one session to eat something (Fajitas!) and prepare for my own. Having lunch with Cary Millsap, I was surprised to hear that he would be an Enkitec employee from that day (with a nice sideline in books and tools and training too, of course). It strikes me that this is a good move for all concerned. Nobody sane wouldn't want Cary on board and those Accenture people can help sell the Method-R skills into as many customers as possible. Good move by Kerry Osborne, if you ask me.

I always look forward to watching Tyler Muth present and this time it was a continuation of his central areas of interest these days - High Throughput Computing on Exadata - which was a collection of tips and a sense of the approach he takes when working with large ETL processes and the like. Encouragingly for me, they were very similar to some client work I did a year or two ago, presented at last year's OUG Finland conference, and it's always good to know you're heading in the right direction. I remember saying I would write some blog posts about that too! Maybe some day ... I could watch Tyler present all day, whether he sticks to his planned ideas or goes a little off-piste, because he always has something interesting to say in an entertaining way and feels like a kindred spirit. (With less swearing perhaps! ;-)) One take-away I noticed was that he recommended the ODA sizing document as a nice guide to stop people over-consolidating their workloads, e.g. do you really want 42 databases on an 8 core server? I think it was the tables in this section that he was talking about.

Next up was my presentation which I think went well from a presentation perspective and contained some hopefully useful real world DBaaS experiences but I think the problem with that particular presentation is that it's not really technical enough on the one hand and on the other I *want* to keep it light. The truth is that I could probably talk sensibly about the subject for three hours so I've walked away thinking about what I didn't say! Still, it wasn't too bad and Martin Bach seemed to like it, which was comforting :-) 

Not too long later, I had somehow been roped into taking part in a Hadoop vs. Oracle Database Panel with my old mate Alex Gorbachev as the moderator and Tanel Poder, Eric Sammer and Kerry Osborne debating the strengths and weaknesses of the two approaches, and whether the RDBMS is on its way out as a useful technology for most data analysis tasks. I must confess I'm not the greatest fan of panel sessions because you can never get into enough detail and argue the case properly, but at least not everyone agreed, although that might have been the beer and jetlag combining in my case :-) Eventually we split into two groups to try to illustrate the different design approaches to a problem from an Oracle or a Hadoop perspective, so I roped in an impossibly young and smart backup team of Martin Bach and Karl Arao. I probably still managed to talk over them though ;-) I suppose it was all a good bit of knockabout fun and came with a free beer attached, so I mustn't grumble! I guess I walked away still thinking it's horses for courses ...

That finished off day one nicely for me as I felt I'd both learned a few things and had fun at the same time. The fun is always guaranteed but I rarely learn as much as I do at E4.

The keynote the following morning was probably worth the price of admission alone. The Exadata SmartScan Deep Dive delivered by Roger MacNicol (who is a Consulting Engineer working on Smart Scan at Oracle) was precisely the type of presentation techies are looking for but with the added value that his presentation style and slides are as excellent as his content. It's so long ago now that I'm a bit short on details but plan to watch it again and, if you get a chance to hear Roger speak at a future conference (are you listening UKOUG?) then you should grab it with both hands, feet and whatever else you have at your disposal!

Maria Colgan, on the other hand, was far too busy working on the upcoming launch of Oracle In-Memory or something to bother about actually attending conferences to deliver her keynotes in person! ;-) Which wasn't a big deal for me personally as I've already heard a *lot* about IMO but it was a good opportunity for me to take the p*** out of her on Twitter!

My last full presentation of the day before I had time for a few beers and to head to the airport was Think Exa! with Martin Bach & Frits Hoogland, which was a collection of a handful of subject areas they highlighted as being worth thinking carefully about during initial implementations on Exadata servers. At first I thought Mr. Bach was going to do all the talking, but they did take it in lengthy alternating sections, which worked really well and set me up nicely for a last few beers with Martin, Frits and soon-to-be-Oak-Table-Network-member Alex Fatkulin, who I've known electronically for a while but only get to see at E4.

It's the second time I've been able to attend E4 in person (the other I attended remotely) and it's quickly become one of my favourite conferences. Sure, it's organised by good people doing a great job and some are friends, but I think as well as taking good care of people, Kerry Osborne's contacts within Oracle on top of Enkitec's consultants and customers merge together to help create a truly special agenda. I'd thoroughly recommend attending if you have the opportunity, even virtually.

Thanks again to everyone at Enkitec for another top job organising this thing and for giving me the opportunity for another trip to Dallas!

However, the complete highlight of the conference for me was getting to spend some decent time with a happy and healthy-looking Peter Bach, which amazed me after the health problems he's faced this year. Good to have you back, Peter, and I bet Oracle are glad to have you back working for them too! LOL

List of PeopleSoft Blogs

Jim Marion - 12 hours 19 min ago
It has been amazing to watch the exponential growth in the number of PeopleSoft community bloggers. It seems that most of them have links to other PeopleSoft blogs, but where is the master list of PeopleSoft blogs? Here is my attempt to create one. Don't see your blog on the list? Add a comment and I'll review your blog. If your blog is education oriented, then I will put it in the list... and probably delete your comment (that way you don't have to feel awkward about self promotion). There are some PeopleSoft related blogs that I really wanted to include, but they just weren't educational (more marketing than education). I suppose some could say that the Oracle blogs I included were primarily marketing focused. That is true. I included them, however, because those product release announcements are so valuable.
I have not read all of these blogs. I can't, don't, and won't attest to the quality of the content in those blogs. Each reader should evaluate the content of these blog posts before implementing suggestions identified in these blogs and their corresponding comments.

    Developing with the WebSocket and JSON Processing API with WebLogic Server 12.1.3 and Maven

    Steve Button - 16 hours 47 min ago
    Oracle WebLogic Server 12.1.3 provides full support for Java EE 6 and also adds support for a select set of APIs from Java EE 7.

    The additional APIs are:
    • JSR 356 - Java API for WebSocket 1.0
    • JSR 353 - Java API for JSON Processing
    • JSR 339 - Java API for RESTful Web Services 2.0
    • JSR 338 - Java Persistence API 2.1
    See the "What's New in 12.1.3 Guide" for more general information.

    At runtime, the WebSocket and JSON Processing APIs are available as defaults and don't require any form of post installation task to be performed to enable their use by deployed applications.

    On the other hand, the JPA and JAX-RS APIs require a step to enable them to be used by deployed applications.

    Developing with the WebSocket and JSON Processing APIs using Maven

    To create applications with these APIs for use with Oracle WebLogic Server 12.1.3, the API needs to be made available to the development environment.  Typically when developing Java EE 6 applications, the javax:javaee-web-api artifact is used from the following dependency:

    As the WebSocket and JSON Processing APIs are not part of the Java EE 6 API, they need to be added to the project as dependencies.

    The obvious but incorrect way to do this is to change the javax:javaee-web-api dependency to be version 7 so that they are provided as part of that dependency.  This introduces the Java EE 7 API to the application, including APIs such as  Servlet 3.1, EJB 3.2 and so forth which aren't yet supported by WebLogic Server.  Thus it presents the application developer with APIs to use that may not be available on the target server.

    The correct way to add the WebSocket and JSON Processing APIs to the project is to add individual dependencies for each API using their individual published artifacts.
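    For example, the individual API artifacts can be declared directly (a sketch using the javax-published coordinates; check the versions against your own environment):

```xml
<!-- WebSocket 1.0 API (JSR 356) -->
<dependency>
    <groupId>javax.websocket</groupId>
    <artifactId>javax.websocket-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>

<!-- JSON Processing 1.0 API (JSR 353) -->
<dependency>
    <groupId>javax.json</groupId>
    <artifactId>javax.json-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>
```

    The provided scope keeps the APIs on the compile classpath without bundling them into the deployment, since the server supplies them at runtime.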



    Using NetBeans, these dependencies can be quickly and correctly added using the code-assist dialog, which presents developers with options for how to resolve any missing classes that have been used in the code.


    Using the JSON Processing API with WebSocket Applications

    The JSON Processing API is particularly useful for WebSocket application development since it provides a simple and efficient API for parsing JSON messages into Java objects and for generating JSON from Java objects.  These tasks are very typically performed in WebSocket applications using the Encoder and Decoder interfaces, which provide a mechanism for transforming custom Java objects into WebSocket messages for sending and for converting WebSocket messages into Java objects.

    An Encoder converts a Java object into a form that can be sent as a WebSocket message, typically using JSON as the format for use by Web browser based JavaScript clients.
    package buttso.demo.cursor.websocket;

    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.websocket.EncodeException;
    import javax.websocket.Encoder;
    import javax.websocket.EndpointConfig;

    /**
     * Convert a MouseMessage into a JSON payload.
     * @author sbutton
     */
    public class MouseMessageEncoder implements Encoder.Text<MouseMessage> {

        private static final Logger logger = Logger.getLogger(MouseMessageEncoder.class.getName());

        @Override
        public String encode(MouseMessage mouseMessage) throws EncodeException {
            logger.log(Level.FINE, mouseMessage.toString());
            JsonObject jsonMouseMessage = Json.createObjectBuilder()
                    .add("X", mouseMessage.getX())
                    .add("Y", mouseMessage.getY())
                    .add("Id", mouseMessage.getId())
                    .build();
            logger.log(Level.FINE, jsonMouseMessage.toString());
            return jsonMouseMessage.toString();
        }

        @Override
        public void init(EndpointConfig ec) {
        }

        @Override
        public void destroy() {
        }
    }


    A Decoder takes a String from a WebSocket message and turns it into a custom Java object, typically receiving a JSON payload that has been constructed and sent from a Web browser based JavaScript client.
    package buttso.demo.cursor.websocket;

    import java.io.StringReader;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.websocket.Decoder;
    import javax.websocket.EndpointConfig;

    /**
     * Converts a JSON payload into a MouseMessage.
     * @author sbutton
     */
    public class MouseMessageDecoder implements Decoder.Text<MouseMessage> {

        private static final Logger logger = Logger.getLogger(MouseMessageDecoder.class.getName());

        @Override
        public MouseMessage decode(String message) {
            logger.log(Level.FINE, message);
            JsonObject jsonMouseMessage = Json.createReader(new StringReader(message)).readObject();
            MouseMessage mouseMessage = new MouseMessage();
            // populate the message from the parsed JSON (setX/setY assumed to
            // exist on MouseMessage alongside the getters used by the encoder)
            mouseMessage.setX(jsonMouseMessage.getInt("X"));
            mouseMessage.setY(jsonMouseMessage.getInt("Y"));
            logger.log(Level.FINE, mouseMessage.toString());
            return mouseMessage;
        }

        @Override
        public boolean willDecode(String string) {
            return true;
        }

        @Override
        public void init(EndpointConfig ec) {
        }

        @Override
        public void destroy() {
        }
    }


    The Encoder and Decoder implementations are specified as configuration elements on a WebSocket Endpoint (server and/or client) and are automatically invoked to perform the required conversion task.
    @ServerEndpoint(value = "/mouse", decoders = MouseMessageDecoder.class, encoders = MouseMessageEncoder.class)
    public class MouseWebSocket {

        private final Logger logger = Logger.getLogger(MouseWebSocket.class.getName());

        @OnMessage
        public void onMessage(Session peer, MouseMessage mouseMessage) throws EncodeException {
            logger.log(Level.FINE, "MouseMessage {0} from {1}", new Object[]{mouseMessage, peer.getId()});

            for (Session others : peer.getOpenSessions()) {
                try {
                    if (!others.getId().equals(peer.getId())) {
                        mouseMessage.setId((int) peer.getUserProperties().get("id"));
                        // send the tagged message on to each other connected client
                        others.getBasicRemote().sendObject(mouseMessage);
                    }
                } catch (IOException ex) {
                    Logger.getLogger(MouseWebSocket.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    }


    This example enables MouseMessage objects to be used in the WebSocket ServerEndpoint class to implement the required functionality and allow them to be transmitted in JSON format to and from clients. On the JavaScript client, the JSON representation is used to receive MouseMessages sent from the WebSocket Endpoint and to send MouseMessages to the same WebSocket Endpoint.

    The JavaScript JSON API can be used to produce JSON representation of JavaScript objects as well as parse JSON payloads into JavaScript objects for use by the application code. For example, JavaScript logic can be used to send messages to WebSocket endpoints in JSON form using the JSON.stringify function and to create JavaScript objects from JSON messages received from a WebSocket message using the JSON.parse function.
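    In isolation, that round trip looks like this (a standalone sketch with a hypothetical message, independent of any WebSocket connection):

```javascript
// Build the wire format the same way the sending handler does
var mouseMessage = {X: 120, Y: 45, Id: 2};
var payload = JSON.stringify(mouseMessage);   // '{"X":120,"Y":45,"Id":2}'

// ...payload is what travels through ws.send(payload)...

// On receipt, parse it back into a JavaScript object
var received = JSON.parse(payload);
console.log(received.X + "," + received.Y + " from client " + received.Id);
// prints "120,45 from client 2"
```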


    document.onmousemove = function(e) {
        if (tracking) {
            // send current mouse position to websocket in JSON format
            ws.send(JSON.stringify({X: e.pageX, Y: e.pageY}));
        }
    };

    ws.onmessage = function(e) {
        // convert JSON payload into JavaScript object
        mouseMessage = JSON.parse(e.data);

        // create page element using details from received
        // MouseMessage from the WebSocket
        point = document.createElement("div");
        point.style.position = "absolute";
        point.id = mouseMessage.Id;
        point.style.left = mouseMessage.X + "px";
        point.style.top = mouseMessage.Y + "px";
        point.style.color = colors[mouseMessage.Id];
        point.innerHTML = "∗";
        document.body.appendChild(point);
    };
    When running the application, mouse events are captured from the Web client, sent to the WebSocket endpoint in JSON form, converted into MouseMessages, decorated with an ID representing the client the message came from, and then broadcast out to every other connected WebSocket client for display.

    A very crude shared-drawing board. 

    Simultaneous drawing in browser windows using WebSocket and JSON Processing API

    To see how illogical the Brookings Institution report on student loans is, just read the executive summary

    Michael Feldstein - 18 hours 54 min ago
    il·log·i·cal /i(l)ˈläjikəl/ adjective
    1. lacking sense or clear, sound reasoning. (From Google’s definition)

    There have been multiple articles both accepting the Brookings argument that “typical borrowers are no worse off now than they were a generation ago” and those calling out the flaws in the Brookings report. I have written two articles here and here criticizing the report. The problem is that much of the discussion is more complicated than it needs to be. A simple reading of the Brookings executive summary exposes just how illogical the report is.

    College tuition and student debt levels have been increasing at a fast pace for at least two decades. These well-documented trends, coupled with an economy weakened by a major recession, have raised serious questions about whether the market for student debt is headed for a crisis, with many borrowers unable to repay their loans and taxpayers being forced to foot the bill.

    The argument is set up – yes, tuition and debt levels are going up, but how is a crisis defined? It’s specifically about “many borrowers unable to repay their loans”. Is there a crisis? That’s not a bad setup, and it is a valid question to address.

    Our analysis of more than two decades of data on the financial well-being of American households suggests that the reality of student loans may not be as dire as many commentators fear. We draw on data from the Survey of Consumer Finances (SCF) administered by the Federal Reserve Board to track how the education debt levels and incomes of young households evolved between 1989 and 2010. The SCF data are consistent with multiple other data sources, finding significant increases in average debt levels, but providing little indication of a significant contingent of borrowers with enormous debt loads.

    This is an interesting source of data. Yes, the Federal Reserve’s Survey of Consumer Finances tracks student debt, but this data is almost four years old due to the survey’s triennial schedule. 1

    But hold on – now we’re talking about “significant contingent of borrowers with enormous debt loads”? I thought the issue was ability to repay. What does “enormous” even mean other than being a scary word?

    First, we find that roughly one-quarter of the increase in student debt since 1989 can be directly attributed to Americans obtaining more education, especially graduate degrees. The average debt levels of borrowers with a graduate degree more than quadrupled, from just under $10,000 to more than $40,000. By comparison, the debt loads of those with only a bachelor’s degree increased by a smaller margin, from $6,000 to $16,000.

    Fair enough point to start, noting that a quarter of debt growth comes from higher levels of education including grad school. Average debt loads have gone up more than 2.5x for undergrads, and that certainly sounds troublesome given the report’s main point of “no worse off”. Using the ‘but others are worse off, so this is not as bad’ argument, Brookings notes that grad students had their debt go up by 4x. The argument here appears to be that 2.5 is less than 4. 2

    Second, the SCF data strongly suggest that increases in the average lifetime incomes of college-educated Americans have more than kept pace with increases in debt loads. Between 1992 and 2010, the average household with student debt saw an increase of about $7,400 in annual income and $18,000 in total debt. In other words, the increase in earnings received over the course of 2.4 years would pay for the increase in debt incurred.

    Despite the positioning of the report that a small portion of borrowers skews the data and coverage, Brookings resorts to using the mythical “average household”. For that mythical entity, they certainly seem to have the magical touch to not pay any taxes and obtain zero-interest loans.3
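    Brookings’ 2.4-year claim above is simple division, and a quick, hedged calculation shows how fragile it is. The $7,400 and $18,000 figures come from the quoted summary; the 25% effective tax rate below is my own illustrative assumption:

```python
# Back-of-envelope check of Brookings' "2.4 years" figure
income_gain = 7400.0   # increase in annual household income (from the report)
debt_gain = 18000.0    # increase in total student debt (from the report)

# The report's arithmetic: every pre-tax dollar of the raise goes to the loan
naive_years = debt_gain / income_gain
print(round(naive_years, 1))  # → 2.4

# With an assumed 25% effective tax rate, the same payoff takes longer,
# and loan interest (ignored here and by the report) stretches it further
after_tax_years = debt_gain / (income_gain * 0.75)
print(round(after_tax_years, 1))  # → 3.2
```

    Even before interest, the 2.4-year figure only holds for a borrower who pays no taxes on the extra income.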

    Nonetheless, we’ve now changed the issue again – first by ability to repay, then whether the loan is “enormous”, and now based on how long a mythical payoff takes.

    Third, the monthly payment burden faced by student loan borrowers has stayed about the same or even lessened over the past two decades. The median borrower has consistently spent three to four percent of their monthly income on student loan payments since 1992, and the mean payment-to-income ratio has fallen significantly, from 15 to 7 percent. The average repayment term for student loans increased over this period, allowing borrowers to shoulder increased debt loads without larger monthly payments.
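    The lengthening-terms point in the quote can be sanity-checked with the standard amortization formula. The $18,000 balance and 6.8% rate below are my own illustrative assumptions, not figures from the report:

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortization: payment = P*r / (1 - (1+r)**-n)."""
    r = annual_rate / 12.0
    return principal * r / (1.0 - (1.0 + r) ** -months)

# Same debt, two repayment terms: stretching 10 years to 20 cuts the
# monthly payment (and thus the payment-to-income ratio) with no debt relief
ten_year = monthly_payment(18000, 0.068, 120)    # roughly $207/month
twenty_year = monthly_payment(18000, 0.068, 240) # roughly $137/month
```

    This is why a falling payment-to-income ratio can coexist with rising debt loads: the denominator of the burden is being managed by longer terms, not smaller debts.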

    Small issue, but we’ve now gone from average household as key unit of measurement to median borrower? Two changes from one paragraph to the other – average to median and household to borrower?

    OK, now we have replaced the scary “enormous” with “borrowers struggling with high debt loads”. Although not in the executive summary, the analysis of the report seems to define these large debts as $100,000 or more. Doesn’t it matter who the borrower is? A humanities PhD graduate working as an adjunct for $25,000 a year might view $20,000 debt as enormous.

    Brookings introduces a new measure, and this one does at least take into account the difference in borrowers: payment-to-income ratios of median borrowers. If I’m reading the argument correctly (this took a while based on key measures and terms changing paragraph to paragraph), not only should there be no crisis, but the situation might actually be improving.

    These data indicate that typical borrowers are no worse off now than they were a generation ago, and also suggest that the borrowers struggling with high debt loads frequently featured in media coverage may not be part of a new or growing phenomenon. The percentage of borrowers with high payment-to-income ratios has not increased over the last 20 years—if anything, it has declined.

    So I was reading it correctly: “typical borrowers are no worse off” and the percentage of borrowers with high ratios has declined.4 The only problem, however, is that if we go back to the original setup of the issue, “many borrowers unable to repay their loans”, there might be a much more direct measurement. How about actually seeing if borrowers are failing to repay their loans (aka being delinquent)?

    The Brookings report does not analyze loan delinquency at all - the word “default” is only mentioned three times – once referring to home mortgages and twice referring to interest rates (not once for the word “delinquent”). What do actual delinquency rates show us?

    It turns out that we can go to the same source of data and find out. Here is the New York Fed report from late 2013:


    D’oh! It turns out that real borrowers with real tax brackets paying off real loans are having real problems. The percentage at least 90 days delinquent has more than doubled in just the past decade. In fact, based on another Federal Reserve report, the problem is much bigger for the future, “44% of borrowers are not yet in repayment, and excluding those, the effective 90+ delinquency rate rises to more than 30%”.

    More than 30% of borrowers who should be paying off their loans are at least 90 days delinquent? It seems someone didn’t tell them that their payment-to-income ratios (at least for their mythical average friends) are just fine and that they’re “no worse off”.
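    The “effective” rate in the Fed’s statement follows from simple conditioning. The 17% headline rate below is an illustrative assumption on my part, chosen only to show how the arithmetic works:

```python
# Borrowers not yet in repayment cannot be delinquent, so the headline
# delinquency rate understates the rate among those actually repaying.
headline_rate = 0.17      # assumed share of ALL borrowers 90+ days delinquent
not_in_repayment = 0.44   # share not yet in repayment (the Fed's figure)

effective_rate = headline_rate / (1 - not_in_repayment)
print(round(effective_rate, 3))  # → 0.304, i.e. "more than 30%"
```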

    Back to the Brookings report:

    This new evidence suggests that broad-based policies aimed at all student borrowers, either past or current, are likely to be unnecessary and wasteful given the lack of evidence of widespread financial hardship. At the same time, as students take on more debt to go to college, they are taking on more risk. Consequently, policy efforts should focus on refining safety nets that mitigate risk without creating perverse incentives.

    Despite the flawed analysis that changed terms, changed key measures, and failed to look at any data on delinquencies, Brookings now calls out a “lack of evidence of widespread financial hardship”. How can we take their recommendations seriously when the supporting analysis is fundamentally illogical?

    At least the respectable news organizations will do basic checking of the report before parroting such flawed analysis.

    The worries are exaggerated: Only 7% of young adults with student debt have $50,000 or more.

    — David Leonhardt (@DLeonhardt) June 24, 2014

    ICYMI=>The Student Debt Crisis Is Being Manufactured To Justify Debt Forgiveness #tcot #taxes

    — Jeffrey Dorfman (@DorfmanJeffrey) July 5, 2014


    1. Also note that we’re skipping the years with the highest growth in student debt.
    2. This argument also ignores or trivializes the issue that grad students are indeed students.
    3. There is no other way to get to the 2.4 year payoff.
    4. And yet another change – from average to median to typical.

    The post To see how illogical the Brookings Institution report on student loans is, just read the executive summary appeared first on e-Literate.

    Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

    Pete Finnigan - Tue, 2014-07-29 14:35

    My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

    Posted by Pete On 23/07/14 At 08:44 PM

    Categories: Security Blogs

    Integrating PFCLScan and Creating SQL Reports

    Pete Finnigan - Tue, 2014-07-29 14:35

    We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

    Posted by Pete On 25/06/14 At 09:41 AM

    Categories: Security Blogs

    Automatically Add License Protection and Obfuscation to PL/SQL

    Pete Finnigan - Tue, 2014-07-29 14:35

    Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

    Posted by Pete On 17/04/14 At 03:56 PM

    Categories: Security Blogs

    Twitter Oracle Security Open Chat Thursday 6th March

    Pete Finnigan - Tue, 2014-07-29 14:35

    I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here . The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

    Posted by Pete On 05/03/14 At 10:17 AM

    Categories: Security Blogs

    PFCLScan Reseller Program

    Pete Finnigan - Tue, 2014-07-29 14:35

    We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled " PFCLScan Reseller Program ". If....[Read More]

    Posted by Pete On 29/10/13 At 01:05 PM

    Categories: Security Blogs

    PFCLScan Version 1.3 Released

    Pete Finnigan - Tue, 2014-07-29 14:35

    We released version 1.3 of PFCLScan our enterprise database security scanner for Oracle a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

    Posted by Pete On 18/10/13 At 02:36 PM

    Categories: Security Blogs

    PFCLScan Updated and Powerful features

    Pete Finnigan - Tue, 2014-07-29 14:35

    We have just updated PFCLScan our companies database security scanner for Oracle databases to version 1.2 and added some new features and some new contents and more. We are working to release another service update also in the next couple....[Read More]

    Posted by Pete On 04/09/13 At 02:45 PM

    Categories: Security Blogs

    Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

    Pete Finnigan - Tue, 2014-07-29 14:35

    It has been a few weeks since my last blog post but don't worry I am still interested to blog about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

    Posted by Pete On 28/08/13 At 05:04 PM

    Categories: Security Blogs

    Is IoT a boon or a bane for companies?

    Chris Foot - Tue, 2014-07-29 13:58

    The Internet of Things has been a hot topic of conversation among IT professionals as of late. 

    Promises of more unique insight into customer behavior have tempted consumer-focused companies to invest in the technology. Manufacturers are looking to implement intelligent devices to achieve higher levels of productivity. However, a number of organizations are ignoring the impact IoT will have on database security. 

    A hacker's haven 
    It turns out cybercriminals are just as interested in IoT as multi-billion dollar corporations are. ZDNet noted a study conducted by Hewlett-Packard's Fortify division, which scanned 10 of the most prevalent Internet-connected devices, discovering about 25 faults per implementation. The source acknowledged some of the most telling discoveries:

    • 90 percent of the devices assessed contained at least one piece of personal information pertaining to an individual
    • Weak credentials and persistent cross-site scripting plagued six out of 10 mechanisms
    • 80 percent of implementations failed to allow users to employ intricate, lengthy passwords
    • 70 percent of devices didn't protect communications with encryption, while 60 percent of such machines lacked the programs necessary to launch encoding tasks

    Essentially, it wouldn't be too difficult for even a fledgling hacker to gain access to a company's IoT assets, establish a network connection with its databases and steal information from the business. Database active monitoring can deter such attempts, but a wide distribution of Internet-connected property can make such a task difficult for in-house IT departments to perform. 

    Where's the issue?
    Consumer-focused IoT devices are particularly vulnerable to sustaining damaging cyberattacks because they're so ubiquitous. Yet again, it's important to ask why IoT implementations are so defensively weak in the first place. 

    Re/code contributor Arik Hesseldahl identified two factors as the culprits of IoT instability:

    1. Manufacturers are rushing to get these products to market without giving enough attention to security features. 
    2. The majority of these devices run the Linux operating system, which is already prone to a number of defensive shortcomings.

    One of the only ways to guarantee hackers aren't infiltrating these assets is by protecting company databases from malware that may be attempting to enter servers through the mechanisms. Why is this backend surveillance necessary? Because the devices themselves don't have the same protective software PCs, tablets and even many smartphones possess. 

    The scale of the problem? Hesseldahl referenced statistics from Gartner, which projects that 26 billion individual devices will be online by 2020. Essentially, there's a massive pool of property cybercriminals could exploit in order to steal financial information. 

    The post Is IoT a boon or a bane for companies? appeared first on Remote DBA Experts.

    Solid Conference San Francisco 2014: Complete Video Compilation

    Surachart Opun - Tue, 2014-07-29 08:17
    Solid Conference focused on the intersection of software and hardware. It's a great community of software and hardware people, where audiences can pick up new ideas for combining the two. It gathered ideas from engineers, researchers, roboticists, artists, startup founders, and innovators.
    O'Reilly has launched HD videos for this conference: "Solid Conference San Francisco 2014: Complete Video Compilation. Experience the revolution at the intersection of hardware and software, and imagine the future." The video files are huge, so downloads can take a long time; a download manager helps.
    After watching them (run time: 36 hours 8 minutes), I was excited to learn new things about machines, devices, components, and more.

    Written By: Surachart Opun
    Categories: DBA Blogs

    The Nature of Digital Disruption

    WebCenter Team - Tue, 2014-07-29 08:10
    by Dave Gray, Entrepreneur, Author & Consultant

    Digital Disruption – The change that occurs when new digital technologies and business models affect the value proposition of existing goods and services or bring to market an entirely new innovation.

    Why is the shift to digital so disruptive?

    As a global society, we are currently in the process of digitizing everything. We are wrapping our physical world with a digital counterpart, a world of information, which parallels and reflects our own. We want to know everything we can think of about everything we can think of.

    This whirl of digital information changes the playing field for businesses, because digital information does not abide by any of the rules that we are used to in business. 

    In a digital world, products and services have no physical substance. There are no distribution costs. A single prototype can generate an infinite number of copies at no cost. And since the products and services are so different, the environment around them becomes unstable; as the digital layer interacts with the physical layer, everything in the ecosystem is up for grabs. Suddenly new products become possible and established ones become obsolete overnight.

    Science-fiction writer Arthur C. Clarke once said that “Any sufficiently advanced technology is indistinguishable from magic.”

    In the business world today, you are competing with sorcerers. You need to learn magic.

    Let’s take the music industry as an example of how technology changes the playing field. Music used to be very expensive to record and distribute. Every time a new technology comes along, the music industry has had to adjust.

    The graph on the left shows units sold in the music industry, by media, since 1973. See the overlapping curves? Each technology has a lifecycle – early in the lifecycle sales are low, but they rise as more people adopt the technology. When a new technology comes along the older technologies suffer. But not to worry, people still need their music, right? Typically the lifecycle curve for “units sold” closely echoes the revenue curve.

    But when the product becomes purely digital – when it enters the realm of magic – the cost of making and distributing the product plummets to nearly zero. This means more people can produce and distribute music, more cheaply and easily. More music becomes available to the public and purchases skyrocket – but the price per unit drops precipitously.

    Take a look at the two graphs below. The left chart is units sold and the right one is revenue. Note how digital downloads (units sold) have skyrocketed, while the revenue curve is the smallest in years. 

    The core issue is that even though unit sales rise rapidly, the price per unit drops so much faster that the revenue from sales fails to make up the difference. The industrial-age company, which has built its business model on the high costs of producing and distributing physical products, now has a high-cost infrastructure which is suddenly obsolete. What was once an asset is now a critical liability. This opens the entire industry to new players who can offer services to this new world at a dramatically lower cost.

    The product is now digital. So the album, which you once charged $15 for, now retails for about $10. Ouch. You just lost a third of your revenue. But it gets worse. In the old days you sold music by the album, because the cost to make and distribute single songs on CD kept the price of singles relatively high. So people would buy albums containing a lot of songs that, it now appears, they didn’t really want. The chart below compares the typical mix between album and single sales on CD vs. downloads. The product mix has flipped completely, from most people buying albums for $15 to most people buying songs for $1.

    So the revenue per unit drops once again. Even with some people buying albums, the average revenue per unit is about $1.50. That means your entire industry has lost about 90% of its per-unit revenue, almost overnight. 
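    That per-unit collapse can be sketched in a few lines. The 95/5 product mixes below are illustrative assumptions of mine, not the article's exact data, but they reproduce its ballpark numbers:

```python
album_cd, single_cd = 15.0, 1.0   # hypothetical CD-era prices
album_dl, single_dl = 10.0, 1.0   # hypothetical download-era prices

# CD era: mix dominated by albums; download era: mix flips to singles
cd_avg = 0.95 * album_cd + 0.05 * single_cd   # about $14.30 per unit
dl_avg = 0.05 * album_dl + 0.95 * single_dl   # about $1.45 per unit

print(round(dl_avg / cd_avg, 2))  # → 0.1, i.e. roughly 90% less per unit
```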

    In the world of manufacturing we talk about efficiency and productivity. You look to efficiency to decrease your costs and productivity to increase your revenue. In between you seek to make a profit. But you can’t streamline yourself to profits when the world is changing around you so profoundly. You need different strategies, different tactics.

    The digital revolution is the biggest shift in the music industry since the 1920’s, when phonograph records replaced sheet music as the industry’s profit center.

    What’s going on here? First, the means of making and distributing the product change. Suddenly the costs are so low that thousands of new competitors enter the market. Every artist can now compete with you from his or her garage, bringing new meaning to the word “garage band.”

    But as if that weren’t bad enough, this also changes the things that people buy and the way they buy them. It’s a cascading effect.

    So who wins and how do they win? Let’s look at Apple’s iTunes strategy. Apple looked at the entire industry as an ecosystem – people buy music and they play it on a device. If they like the experience they buy more music. In time they might buy another device, and so on, and so on. This is not a business process, it’s a business cycle.

    Sony had everything that Apple had – in fact, much more. They had a powerful music-player brand, the Walkman, the established industry leader for portable music players. They had more engineers. They had a music division with 21 record labels. 

    Sony’s divisions, which worked in their favor for efficiency and productivity, worked against them when it came to collaboration and innovation. The company was divided into separate operating units which competed with each other internally, making it difficult to collaborate on projects that spanned across multiple units. Sony was a classic industrial-age company, focused on productivity and efficiency.

    What did Apple do that Sony didn’t? They focused on the system, not the product.

    If you want to record your own music, Apple makes the software for that. If you want to sell your music, you can sell it on iTunes. If you want to play it, Apple makes the device. In case you hadn’t noticed, Apple had to look at the entire ecosystem of the record industry through a new, digital lens, including:

    1. Understand the digital infrastructure and how it changed the playing field.
    2. Relentless focus on user experience – simplicity, “just works” design, delight customers.
    3. Smart partnerships: Apple began by giving away the money: Record companies made 70 cents on every 99 cent purchase, with the rest split between artists and merchandising costs.
    4. Interoperability: Apple chose to support an open format that would work with any player, while Sony chose a proprietary format for their first digital media player.

    In short: 

    Think creatively. Understand, provide for, and support the entire ecosystem. Fill in the gaps when you can. Eliminate middlemen if you can – partner with them if you must. Partner with value providers (like artists and record companies that own large repositories of music). Be fearless about cannibalizing your own core business – if you’re not doing it, somebody else is.

    The core difference is between an industrial, manufacturing-based model which focuses on efficiency and productivity – making more widgets more efficiently, and an information-based model which focuses on creativity and innovation. The industrial model thrives on successful planning and logistics, while the information model thrives on systems thinking, rapid learning and adaptation to a changing environment.

    What can you do? As a company, you will need to innovate differently. That’s the subject of my next post, which we will discuss next week.  

    In the meantime, you can hear more from Dave on Digital Disruption in our Digital Business Thought Leaders webcast "The Digital Experience: A Connected Company’s Sixth Sense". 

    Create Windows Service for Oracle RAC

    Pythian Group - Tue, 2014-07-29 08:08

    It’s my first time on RAC system for Windows and I’m happy to learn something new to share.

    I created a new service for a database (restoredb), only to find out the ORACLE_HOME for the service was “c:\oracle\product\10.2.0\asm_1”

    Any ideas as to what was wrong?

    C:\dba_pythian>set oracle_home=C:\oracle\product\10.2.0\db_1
    C:\dba_pythian>echo %ORACLE_HOME%
    C:\dba_pythian>oradim -NEW -SID restoredb -STARTMODE manual
    Instance created.
     1 STOPPED agent11g1Agent                                    c:\oracle\app\11.1.0\agent11g
     2 STOPPED agent11g1AgentSNMPPeerEncapsulator                c:\oracle\app\11.1.0\agent11g\bin\encsvc.exe
     3 STOPPED agent11g1AgentSNMPPeerMasterAgent                 c:\oracle\app\11.1.0\agent11g\bin\agntsvc.exe
     4 RUNNING +ASM1                                             c:\oracle\product\10.2.0\asm_1
     5 RUNNING ClusterVolumeService                              C:\oracle\product\10.2.0\crs
     6 RUNNING CRS                                               C:\oracle\product\10.2.0\crs
     7 RUNNING CSS                                               C:\oracle\product\10.2.0\crs
     8 RUNNING EVM                                               C:\oracle\product\10.2.0\crs
     9 STOPPED JobSchedulerDWH1                                  c:\oracle\product\10.2.0\db_1
    10 STOPPED JobSchedulerRMP1                                  c:\oracle\product\10.2.0\db_1
    11 RUNNING OraASM10g_home1TNSListenerLISTENER_PRD-DB-10G-01  C:\oracle\product\10.2.0\asm_1
    12 STOPPED OraDb10g_home1TNSListener                         c:\oracle\product\10.2.0\db_1
    13 STOPPED ProcessManager                                    "C:\oracle\product\10.2.0\crs"
    14 RUNNING DWH1                                              c:\oracle\product\10.2.0\db_1
    15 RUNNING RMP1                                              c:\oracle\product\10.2.0\db_1
    16 RUNNING agent12c1Agent                                    C:\agent12c\core\
    17 RUNNING restoredb                                         c:\oracle\product\10.2.0\asm_1
    18 STOPPED JobSchedulerrestoredb                             c:\oracle\product\10.2.0\asm_1

    Check the PATH variable: the ASM home is listed before the DB home, which is why oradim resolved from the ASM home.
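    The resolution logic is just first-match-wins along PATH. A minimal Python sketch (with hypothetical homes mirroring the listings in this post):

```python
def first_on_path(exe, path_dirs, installed):
    """Return the first PATH directory containing exe, Windows-style."""
    for d in path_dirs:
        candidate = d.rstrip("\\") + "\\" + exe
        if candidate in installed:
            return candidate
    return None

path = [r"C:\oracle\product\10.2.0\asm_1\BIN",   # ASM home listed first
        r"C:\oracle\product\10.2.0\db_1\BIN"]
installed = {r"C:\oracle\product\10.2.0\asm_1\BIN\oradim.exe",
             r"C:\oracle\product\10.2.0\db_1\BIN\oradim.exe"}

# A bare "oradim" resolves to the ASM copy, so the new service inherits
# the ASM ORACLE_HOME unless the DB home's oradim is invoked by full path.
print(first_on_path("oradim.exe", path, installed))
```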


    Create the database service, specifying the full path to oradim in the DB home:

    C:\dba_pythian>oradim -DELETE -SID restoredb
    Instance deleted.
     1 STOPPED agent11g1Agent                                    c:\oracle\app\11.1.0\agent11g
     2 STOPPED agent11g1AgentSNMPPeerEncapsulator                c:\oracle\app\11.1.0\agent11g\bin\encsvc.exe
     3 STOPPED agent11g1AgentSNMPPeerMasterAgent                 c:\oracle\app\11.1.0\agent11g\bin\agntsvc.exe
     4 RUNNING +ASM1                                             c:\oracle\product\10.2.0\asm_1
     5 RUNNING ClusterVolumeService                              C:\oracle\product\10.2.0\crs
     6 RUNNING CRS                                               C:\oracle\product\10.2.0\crs
     7 RUNNING CSS                                               C:\oracle\product\10.2.0\crs
     8 RUNNING EVM                                               C:\oracle\product\10.2.0\crs
     9 STOPPED JobSchedulerDWH1                                  c:\oracle\product\10.2.0\db_1
    10 STOPPED JobSchedulerRMP1                                  c:\oracle\product\10.2.0\db_1
    11 RUNNING OraASM10g_home1TNSListenerLISTENER_PRD-DB-10G-01  C:\oracle\product\10.2.0\asm_1
    12 STOPPED OraDb10g_home1TNSListener                         c:\oracle\product\10.2.0\db_1
    13 STOPPED ProcessManager                                    "C:\oracle\product\10.2.0\crs"
    14 RUNNING DWH1                                              c:\oracle\product\10.2.0\db_1
    15 RUNNING RMP1                                              c:\oracle\product\10.2.0\db_1
    16 RUNNING agent12c1Agent                                    C:\agent12c\core\
    C:\dba_pythian>dir C:\oracle\product\10.2.0\db_1\BIN\orad*
     Volume in drive C has no label.
     Volume Serial Number is D4FE-B3A8
     Directory of C:\oracle\product\10.2.0\db_1\BIN
    07/08/2010  10:01 AM           121,344 oradbcfg10.dll
    07/20/2010  05:20 PM             5,120 oradim.exe
    07/20/2010  05:20 PM             3,072 oradmop10.dll
                   3 File(s)        129,536 bytes
                   0 Dir(s)  41,849,450,496 bytes free
    C:\dba_pythian>C:\oracle\product\10.2.0\db_1\BIN\oradim.exe -NEW -SID restoredb -STARTMODE manual
    Instance created.
     1 STOPPED agent11g1Agent                                    c:\oracle\app\11.1.0\agent11g
     2 STOPPED agent11g1AgentSNMPPeerEncapsulator                c:\oracle\app\11.1.0\agent11g\bin\encsvc.exe
     3 STOPPED agent11g1AgentSNMPPeerMasterAgent                 c:\oracle\app\11.1.0\agent11g\bin\agntsvc.exe
     4 RUNNING +ASM1                                             c:\oracle\product\10.2.0\asm_1
     5 RUNNING ClusterVolumeService                              C:\oracle\product\10.2.0\crs
     6 RUNNING CRS                                               C:\oracle\product\10.2.0\crs
     7 RUNNING CSS                                               C:\oracle\product\10.2.0\crs
     8 RUNNING EVM                                               C:\oracle\product\10.2.0\crs
     9 STOPPED JobSchedulerDWH1                                  c:\oracle\product\10.2.0\db_1
    10 STOPPED JobSchedulerRMP1                                  c:\oracle\product\10.2.0\db_1
    11 RUNNING OraASM10g_home1TNSListenerLISTENER_PRD-DB-10G-01  C:\oracle\product\10.2.0\asm_1
    12 STOPPED OraDb10g_home1TNSListener                         c:\oracle\product\10.2.0\db_1
    13 STOPPED ProcessManager                                    "C:\oracle\product\10.2.0\crs"
    14 RUNNING DWH1                                              c:\oracle\product\10.2.0\db_1
    15 RUNNING RMP1                                              c:\oracle\product\10.2.0\db_1
    16 RUNNING agent12c1Agent                                    C:\agent12c\core\
    17 RUNNING restoredb                                         c:\oracle\product\10.2.0\db_1
    18 STOPPED JobSchedulerrestoredb                             c:\oracle\product\10.2.0\db_1
    Categories: DBA Blogs

    How SQL Server Browser Service Works

    Pythian Group - Tue, 2014-07-29 08:07

Some of you may wonder what role the SQL Server Browser service plays in a SQL Server instance. In this blog post, I’ll give an overview of the crucial role the SQL Server Browser plays in connectivity, and explore its internals by capturing Network Monitor output during connection attempts under different scenarios.

Here is an executive summary of the connectivity flow (Executive Workflow diagram):


Here is another diagram explaining the SQL Server connectivity status for Named and Default instances under various scenarios:


    Network Monitor output for connectivity to Named instance when SQL Browser is running:

In the diagram below, we can see that a UDP request to port 1434 was sent from the local machine (client) to the SQL Server machine (server), and the response came back from the server’s UDP port 1434 to the client’s port with the list of instances and the ports on which they are listening:
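That UDP exchange is the SQL Server Resolution Protocol (SSRP): the client sends a one-byte enumeration request to UDP 1434, and the Browser replies with a semicolon-delimited instance list. Below is a minimal Python sketch of that exchange, assuming standard SSRP framing; the parsing helper is my own illustration, not part of any driver.

```python
import socket

def query_sql_browser(host, timeout=2.0):
    """Send an SSRP enumeration request (0x03 = CLNT_UCAST_EX) to UDP 1434
    and return the raw instance-list string from the Browser service."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"\x03", (host, 1434))
        data, _ = s.recvfrom(65535)
    # Response: 0x05 marker, 2-byte little-endian length, then the payload
    return data[3:].decode("ascii", errors="replace")

def parse_instances(payload):
    """Split the payload (instances separated by ';;', fields in
    key;value pairs) into one dict per instance."""
    instances = []
    for chunk in payload.strip(";").split(";;"):
        fields = chunk.split(";")
        instances.append(dict(zip(fields[0::2], fields[1::2])))
    return instances
```

For example, a payload such as `ServerName;PRDDB01;InstanceName;SQLEXPRESS;IsClustered;No;Version;10.50.1600.1;tcp;50123;;` parses into a dict whose `tcp` entry is the port the client should dial, which is exactly the information the captures below show flowing back over UDP 1434.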



    Network Monitor output for connectivity to Named instance when SQL Browser is stopped/disabled:

 We can see that the client sends five requests, all of which receive no response from the server’s UDP port 1434, so connectivity to the named instance is never established.



    Network Monitor output for connectivity to Named instance with port number specified in connection string & SQL Browser is stopped/disabled:

 No call is made to the server’s UDP port 1434; instead, the connection is made directly to the TCP port specified in the connection string.
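The same behavior can be expressed from the client side: a `host,port` server value tells the driver to dial that TCP port directly, while a `host\instance` value forces the UDP 1434 lookup. Here is a small sketch of building such ODBC-style connection strings; the server name and port are hypothetical.

```python
def build_conn_str(server, instance=None, port=None, database="master"):
    """Build an ODBC-style SQL Server connection string.

    With an explicit TCP port the client connects straight to it and
    never queries the Browser service; a named instance without a port
    requires a UDP 1434 lookup; a bare server name means the default
    instance, assumed to be on TCP 1433."""
    if port is not None:
        target = f"{server},{port}"          # "host,port": Browser bypassed
    elif instance is not None:
        target = f"{server}\\{instance}"     # named instance: Browser lookup
    else:
        target = server                      # default instance: assumes 1433
    return f"Server={target};Database={database};Trusted_Connection=yes;"
```

So `build_conn_str("PRD-DB-01", port=50123)` yields a string containing `Server=PRD-DB-01,50123`, and the capture above confirms that with such a string no UDP traffic to 1434 appears at all.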

Network Monitor output for connectivity to Default instance when SQL Browser is running:

 We can see that no calls were made to the server’s UDP port 1434, on which the SQL Server Browser is listening.



Network Monitor output for connectivity to Default instance configured to listen on a port other than the default 1433, with SQL Browser running:

 We can see that connectivity failed after multiple attempts, because the client assumes that the default instance of SQL Server always listens on TCP port 1433.

You can refer to the blog post below for some workarounds to handle this situation:

References:

    SQL Server Browser Service -

    Ports used by SQL Server and Browser Service -

    SQL Server Resolution Protocol Specification -

    Thanks for reading!


    Categories: DBA Blogs

    Oracle Database – Turning OFF the In-Memory Database option

    Marco Gralike - Tue, 2014-07-29 07:03
So how do you turn the option off/disable it… As a privileged database user: > Just don’t set the INMEMORY_SIZE parameter to a non-zero value… (the default...

    Read More