Feed aggregator

I am still here...

Robert Baillie - Mon, 2007-03-19 17:27
Sorry people, I promise I'm still here and I WILL get round to finishing my text on estimating and answering the request for more info on the database patch runner. I will, I will, I will! The problem is, I've started reading again, and I've started playing on-line poker. Damn it :-) But I'm enjoying it, especially a Cohn book on Agile Estimation and Planning. It is an absolute MUST read. It takes off where the estimation chapter from User Stories Applied left off, and it really doesn't disappoint. Unfortunately it seems to say an awful lot that I agree with, and that was going to form the bulk of my next couple of posts. So if you like what I have to say on the topic, then Mike Cohn is definitely worth a read... he goes into a lot more detail than I ever will here! Obviously I'm reading an awful lot on Texas Hold 'em as well... but I'm not going to tell you what 'cause that might take away my advantage ;-)

Producing Estimates

Robert Baillie - Mon, 2007-03-19 17:25
OK, so it's about time I got back into writing about software development, rather than software (or running, or travelling, or feeds) and the hot topic for me at the moment is the estimation process. This topic's probably a bit big to tackle in a single post, so it's post series time. Once all the posts are up I'll slap another up with all the text combined. So – Producing good medium term estimates... I'm not going to talk about the process for deriving a short term estimate for a small piece of work, that's already covered beautifully by Mike Cohn in User Stories Applied, and I blogged on that topic some time ago. Rather I'm going to talk about producing an overall estimate for a release iteration or module. I've read an awful lot on this topic over the last couple of years, so I'm sorry if all I'm doing is plagiarising things said by Kent Beck, Mike Cohn or Martin Fowler (OK, the book's Kent as well, but you get the point), or any of those many people out there that blog, and...

UK here I come

Andries Hanekom - Thu, 2007-03-15 00:45
First and foremost I would like to apologise for not updating the blog for the past couple of months; come to think of it, this is my first post of 2007.

Apart from being extremely busy building a custom bolt-on OAF application for a large multinational, I have been applying to work in the United Kingdom under the government's Highly Skilled Migrant Program (HSMP).

Well, I have been accepted and am currently looking for work before I arrive in the UK on the 1st of May, so any help would be highly appreciated ;-). I am also in the process of creating a new blog to detail my experiences living, working and playing in the UK, so check back soon for a link.

If there are any particular areas of the OAF you would like me to explore more, please drop me a comment and I will consider doing a post.

Call from "Printer Company"

Siva Doe - Tue, 2007-03-13 22:29

This I had to blog about. In India we get lots of unsolicited calls (in spite of laws against them, there are always loopholes) from people trying to sell credit cards, insurance, holidays, club memberships and so on.

This morning I got a call from one such person, on my office phone. The conversation went like this.

Me: Hello!
Caller: Good morning, sir! I am calling from HP. Is this Sivakumar?
Me: Yes. What is this about?
Caller: Sir! Is your company with strength 50 - 100 people?
Me: Excuse me? Where are you calling from?
Caller: HP. Hewlett - Packard! Sir! We are offering server and storage solutions for small to medium companies. Is your company's strength between 50 - 100?
Me: {laughing} Do you know anything about Sun Microsystems?
Caller: Sir! You are into software development, right?
Me: {laughing a bit louder} Well. Yes. But please read more about the company before calling them. We are a direct competitor to HP.
Caller: Sorry sir!

Probably, I will write it down as one over-enthusiastic call center person who didn't do his homework correctly.

At least it was a good light-hearted start to the day :-)

Apps on Demand - Open Sourced ones too

Siva Doe - Tue, 2007-03-13 16:27


Today, Sun announced a cool feature at the new-look Network.com. It is called the Application Catalog, where a user can check out an application, provide their own input, and let the Network.com compute nodes do the work. There are some nice open source applications available for general use. As an end user, you can use these applications; if you are a developer, you can make your application available for other users. Don't have a Solaris x64 system? Well, you can build your application itself on Network.com :-)

This is a huge benefit for Open Source developers as well as ISVs who want to develop applications for the best OS in the world.

Blender is one such application available for use here. If you have a few blend files that you want rendered, you can use the Blender application from the Application Catalog to do the work for you. Meanwhile, your personal compute resources can be used for some other creative work.

Check it out.

There are some blog entries already; the last one is from my team, who bring such applications to Network.com.

http://blogs.sun.com/ontherecord/entry/sun_launches_new_application_jukebox
http://blogs.sun.com/innovation/entry/on_demand_delivery_of_hpc
http://blogs.sun.com/kt/entry/network_com_delivers_on_demand
http://blogs.sun.com/hardik/entry/clustalw_on_sun_grid

BTW, if you would like some application that is not already available, you can request it. Cool, is it not?

Ramblings

Herod T - Mon, 2007-03-12 23:12

I am bored. I am sitting at my desk staring at a very slowly moving tail -f on an RMAN log, copying a production database to test, 52 gig of data; the fun never ends here. It is almost 8pm and I have been here since 7am this morning, so let the ramblings begin.

I am an avid follower of Jonathan Lewis and his articles on the Oracle optimizer. I have his book "Cost-Based Oracle Fundamentals", and I have actually managed to read the entire thing from cover to cover. I can guarantee that most of it went straight in and straight out, leaving little behind. I hope at least that enough of it remains to have a positive effect at a later date.

I had an actual occasion to use Tom Kyte's and Jonathan Lewis's blog entries on ordering a query, showing a developer that he can't rely on the order of rows in a table, because there is no order in a normal heap table. It didn't take much: I simply forwarded him the links and let him try to find a way to prove them wrong. I haven't heard back from him.
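The point is easy to reproduce for yourself. A heap table stores rows wherever there is free space, so a query without an ORDER BY may return them in any order. The sketch below (a hypothetical table and data, not taken from the links above) shows why only an explicit ORDER BY is reliable:

```sql
-- Hypothetical demo: heap tables promise no row order.
CREATE TABLE order_demo (id NUMBER, val VARCHAR2(10));

INSERT INTO order_demo
  SELECT ROWNUM, 'row ' || ROWNUM FROM DUAL CONNECT BY LEVEL <= 5;

-- Delete and re-insert a row; the new copy can land in any free slot,
-- so a bare SELECT may now return 1,2,4,5,3 or any other order.
DELETE FROM order_demo WHERE id = 3;
INSERT INTO order_demo VALUES (3, 'row 3');

SELECT id FROM order_demo;              -- order undefined: storage and plan dependent
SELECT id FROM order_demo ORDER BY id;  -- the only guaranteed ordering
```

Run it in a scratch schema; the unordered SELECT may happen to come back sorted, but nothing obliges it to.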

We have 2 ISVs, let's call them Bob and Doug, that have really been causing me grief lately. These are small shops that unfortunately have developed two systems that have become integral to our production. It is amazing: these two companies are located within a few kilometers of each other and have no idea of each other's existence, but they cause me the same troubles. The troubles are always the same: no apparent in-house testing of patches or upgrades; they appear to be under the impression that that is what our IT staff is for - testing the ISV's code. Bob is database happy; they keep asking for more and more databases on our side. In our environment (they VPN in) they have a production, a test, a dev, and a QA instance for each province we operate in - a total of 12 instances, each around 50 gig in size. Now, that doesn't seem like much, but we do absolutely no development internally - none at all. These instances are here basically because they don't have the server space available for what they think they need, so they burden us with the responsibility of keeping the databases backed up and in good working order. The copy I mentioned at the opening is being done on their behalf. We pay them for support, yet we maintain their support environment.

Doug, on the other hand, is amazingly skilled at stalling problem fixes long enough that the users simply forget and develop workarounds. When Doug does release patches or upgrades, something always, consistently goes wrong - never during our testing, of course. They give us a list of what they changed, we test that, and do a general test of everything else; this particular example is year-end stuff. In October they released an update that worked pretty well; they only had to release the update to us for testing 4 times, which is a new record low (the record high being 21 times). We tested, the users signed off and away we went. January comes along, the users do their month end and everything works great. February rolls in and we are doing an internal audit between Doug's system and our financial system, and the auditors notice a rather minor $60K variance. Tracking it back, it turns out that in the October update the developers at Doug's company slightly modified a view that is only used at month/year end, "for performance purposes", and never told us about it. Their solution to the performance problems was removing a rather important table from the query, the one which tracked and accounted for users' manual changes to the data.

The user that sent the data should have read the reports and caught it early, so Doug made sure the fault landed solely on the user's shoulders. So the users had to make two correcting entries in the GL. Luckily the discrepancy was small and we didn't have to change our year-end results. We are still waiting on Doug to give us a document on any changes necessary to their application for DST.

We are hiring an Oracle applications support person and 2 IBM Lotus Notes (shudder) developers. I haven't quite figured out where management plans to seat them; our cube farm is pretty packed together, with each of us only getting about a 9-foot square of space. Maybe they think we don't need that easy access to the fire escape and they can cram one in there. What they plan to do with the other two, I do not know. Possibly stack us up: lay some flooring across the tops of the cubes and put the cubes two high. We do have a very tall ceiling.

As for my "shudder" about IBM Notes: I have no bad feelings toward the developers that use Notes, I just hate IBM Lotus Notes itself; it is simply one of the worst programs ever created. We have the newest version of it (7.x) and it still sucks. The only time the IBM team that develops Lotus Notes stops building something that sucks is when they start to build vacuum cleaners.

Oh boy, we need to work on the I/O on this test system. I swear there are gerbils in that server running back and forth with floppies in their mouths, transferring the data between disks. 52% done.

I managed to get management sign-off on upgrading an Oracle 7 database to Oracle 10gR2. Apparently the company that supports the application uses 10gR2 internally, even though the majority of their customers are still on Oracle 7. The application is 100% web based, using some web language I can't remember, so the upgrade is apparently really easy, and they are going to supply us with the necessary scripts. That will leave only one Oracle 7 database in production.

uggg... I have to type slower, the RMAN log hasn't moved in minutes.

I have been keeping up with what was happening at Hotsos this year by reading Doug Burns' blog. His house mate of the month is amusing, and his technical knowledge and writing style are well above average. I also came across Don Burleson's personal blog; I follow the forum he hosts. Well, let's just say that starts a whole new chapter on that fellow for me. I know he needs to plug his and his fellow Rampant Press authors' books, but come on :).

I see my RSS reader is showing me that David Aldridge has posted again, finally after a very long time.

The Conversion of '07 continues later this week; this is the final test before we have to do it in production. I will write a note or two on how it goes.

Well, enough rambling for now.








Oracle SQL Developer Migration Workbench Early Adopter Release

Donal Daly - Mon, 2007-03-12 09:11
Last week we released the early adopter release of the Oracle SQL Developer Migration Workbench on OTN. You can find more details about it here. It was a very important release for us, and it marks the start of a new generation of migration tools.

It is nearly 10 years since the original Oracle Migration Workbench was released; back then we supported migrating SQL Server 6.5 to Oracle8. At the time, I believe we were the first to introduce a GUI tool. Previously we had provided a series of migration scripts (shell based + SQL) and a stored procedure converter utility. We went on to add support for Access, Sybase, Informix and DB2, utilizing the same user interface by leveraging our plugin architecture. Over the years we have seen our database competitors and others release similar migration tools for their databases.

With this release, I believe we have made the same dramatic shift again that we made back in 1998. By integrating our migration tool as an extension of SQL Developer (our very popular tool for database developers) we have provided our users with a modern, intuitive UI tightly integrated into an IDE, which should make users even more productive as they carry out database migrations. I don't believe any of our competitors have delivered such tight integration.

This initial release supports Microsoft SQL Server, Access and MySQL. We are introducing support for migrating Microsoft SQL Server 2005 with this release. These third party databases represent the most popular downloads for our existing Oracle Migration Workbench. We will add further platforms in the future. We have also architected this solution to make it even easier to extend and to leverage the rich core migration functionality that we have developed. We hope that others will extend this tool going forward, adding support for additional databases.

The focus now is on completing some features which missed the cut for the early adopter release (more on that in a later post), getting feedback from our user community, and fixing as many reported bugs as possible to ensure the highest quality release when we go production as SQL Developer 1.2. I encourage you to try it out and provide us with feedback. We have set up a comment application through which you can provide feedback. You can access it here.

Some of my favorite features of this new release include:
  • Least privilege migration - you no longer need DBA privileges.
  • Online data move - we have enhanced the online data move to run in parallel, with a configurable degree of parallelism.
  • New T/SQL parser - we have completely rewritten our T/SQL parser. If I'm honest, it was long overdue, but this new parser provides us with the right foundation for a much greater level of automation in converting complex objects (stored procedures, views, triggers).
  • Translation Scratch Editor - allows for the instant translation of Transact-SQL or Microsoft Access SQL to PL/SQL or SQL.
  • Translation Difference Viewer - a color-coded side-by-side viewer to display semantic similarities between the source and translated code.
I'm looking forward to reviewing the feedback from our user community, getting those missing features completed, and getting this new tool to production status as part of SQL Developer 1.2.

DST over

Herod T - Sun, 2007-03-11 09:51

Well, the time has passed and no problems related to DST have cropped up. We all breathed a sigh of relief. Some frantic last-minute patches were put in on our large JSP application; when I say last minute, I mean just after midnight this morning, a few hours before the time switch.

The only major casualty, which is out of our hands, is that our cell phone provider seems to have had some issues. All of the cell phones switched just fine, but none of the BlackBerrys did. Oh well, a manual switch of the time and good to go.

DST is over...


for now.



40 Tips From Tom

Robert Vollman - Fri, 2007-03-09 17:09
Everybody learns their lessons, and so will you. The only variable is how expensive the lesson is. While there is no substitute for direct, first-hand experience, the cheapest way to learn a lesson is to benefit from the experience of others. My favourite source of cheap lessons is Ask Tom. I've compiled a sample collection of Tom's wisdom from just the articles updated in the past week.

What is EclipseLink?

Omar Tazi - Thu, 2007-03-08 18:57
Hopefully by now most of you know that Oracle has been actively contributing resources and IP to the Eclipse community. Oracle has been an active member of the Eclipse community since its inception and a leading participant in both the Eclipse Web Tools Platform (WTP) and the Technology project. Oracle currently leads the JavaServer Faces tooling, Dali JPA tools and BPEL tools projects. Before diving into the announcement, I would like to personally thank all the developers (they know who they are) who spontaneously stopped by the Oracle booth at EclipseCon '07 to tell me how much they thought Oracle is doing a better job of working with the OSS community and how much their perception of Oracle had changed.

So what’s new?

- First, Oracle is now a board member of the Eclipse Foundation.
- Second, Oracle steps up its involvement from simple membership to "Strategic Developer" status. Based on the size of our latest donation (see below), the level of involvement required for this project, and Oracle's interest in the success of the Eclipse platform, we decided to upgrade our status.
- Third, Oracle is donating its award-winning Java persistence framework, Oracle TopLink, to the open source community. What's the big deal, wasn't TopLink already donated to the JCP and project GlassFish, as well as Spring 2.0? That was TopLink Essentials (TLE), not TopLink. I will post another blog entry soon explaining the difference between TLE and TopLink. Basically, Oracle TopLink, which has been around for 13 years, is hands down the industry's most advanced persistence product, with object-to-relational, object-to-XML, and Enterprise Information System data access through all of the major standards, including the Java Persistence API, the Java API for XML Binding, Service Data Objects, and the Java Connector Architecture. TopLink supports most databases, most application servers and most development tools.
- Last but not least, based on this major contribution (TopLink source code and test cases), Oracle proposed an Eclipse project to deliver a comprehensive persistence platform. The project’s name is Eclipse Persistence Platform (EclipseLink). EclipseLink will be led by Oracle.

Can you provide more details about EclipseLink? (from the EclipseLink FAQ)

EclipseLink will deliver a number of components (listed below) which together will constitute a solid framework with support for a number of persistence standards. Here is a list of some planned components:
- EclipseLink-ORM will provide an extensible Object-Relational Mapping (ORM) framework with support for the Java Persistence API (JPA). It will provide persistence access through JPA as well as extended persistence capabilities configured through custom annotations and XML. These extended persistence features include powerful caching (including clustered support), usage of advanced database-specific capabilities, and many performance tuning and management options.
- EclipseLink-OXM will provide an extensible Object-XML Mapping (OXM) framework with support for the Java API for XML Binding (JAXB). It will provide serialization services through JAXB along with extended functionality to support meet-in-the-middle mapping, advanced mappings, and critical performance optimizations.
- EclipseLink-SDO will provide a Service Data Object (SDO) implementation as well as the ability to represent any Java object as an SDO and leverage all of its XML binding and change tracking capabilities.
- EclipseLink-DAS will provide an SDO Data Access Service (DAS) that brings together SDO and JPA.
- EclipseLink-DBWS will provide a web services capability for developers to easily and efficiently expose their underlying relational database (stored procedures, packages, tables, and ad-hoc SQL) as web services. The metadata-driven configuration will provide flexibility as well as allow default XML binding for simplicity.
- EclipseLink-XR will deliver key infrastructure for situations where XML is required from a relational database. The metadata-driven mapping capabilities of EclipseLink-ORM and EclipseLink-OXM are both leveraged for the greatest flexibility. Using this approach to XML-relational access enables greater transformation optimizations as well as the ability to leverage the Eclipse Persistence Platform's shared caching functionality.
- EclipseLink-EIS provides support for mapping Java POJOs onto non-relational data stores using the Java Connector Architecture (JCA) API.

Oracle's love story with Eclipse seems to be getting stronger, is JDeveloper dead?
I keep getting this question over and over, so before anybody posts it in the comments I will address it. At Oracle we believe in "Productivity with Choice". Oracle remains fully committed to JDeveloper as the IDE of choice for Java and service-oriented architecture development. That said, we are also committed to helping our customers who for whatever reason choose Eclipse for their development. So the answer is crystal clear: JDeveloper is stronger than ever, and Oracle will continue to invest in making it better.

These Eclipse-related announcements are yet more proof that Oracle continues to deploy significant effort to initiate, lead, and contribute technology and resources to the OSS community. Stay tuned for more on Oracle and OSS!

OEM 10gR3

Herod T - Thu, 2007-03-08 09:28

For those of you that remember, about a year ago we got OEM 10gR2 installed and running, and I was hopeful. Well, about the only thing OEM was used for was downtime reporting by a manager. The occasional email from the system when something came down, but not much more. No new agents were installed on new servers, nothing was kept up to date; basically, OEM 10gR2 was a dismal failure.


Well, we have decided to upgrade to OEM 10gR3 and take another kick at the can, to see if we can get OEM configured the way it should be and use it the way it should be used. The big push for this came from our Oracle technical sales representative. He came by (for free), learned our environment over 2 days, and then presented some very compelling reasons for using OEM during a 7-hour presentation. Every single one of the reasons was expensive, but he got the managers convinced.

We will be upgrading (or reinstalling) to R3 by the end of March. Our sales rep gave us a 60-day free trial of all of the OEM packs on all of our servers to get me hooked, as well as 5 full days of the technical sales rep here helping with configuration and the proper way to use it. With an estimated $900K bill (before discount) to have the OEM packs on every database, the non-Oracle database servers monitored as well, and our SQL Server databases plugged in too, Oracle is willing to spend some time.

We are also looking at purchasing Oracle Fusion Middleware as our SOA solution, so later this year Oracle is going to make some money on us.

I am now off to a presentation where I am going to try my darndest to convince management that we really and truly need to upgrade our Oracle 7 and Oracle 8 production databases.



DST... Ready?

Herod T - Thu, 2007-03-08 09:13

Well,

We appear to be ready for the big bad early DST. Yesterday we rolled every single one of our test servers forward and waited for the OS to do the switch. No databases came down (yeah!); unfortunately, some of our vendor-supplied patches for our large JSP-based application seem to have failed badly, as the application simply refused to allow data to be entered: "PO create date can not be past PO update date" or something like that.

Now, for the databases where test and production are on the same server - well, that is going to be a "fingers crossed" type of fix. There will still be a large number of IT people in on the morning of the 11th for the old 'just in case'.



Insert into multiple tables from a single query

Herod T - Sat, 2007-03-03 21:52

A friend who does not blog wrote this up for his co-workers; it is straightforward but useful. Enjoy.

A few days ago someone asked if it was possible in an Oracle DB to insert into multiple different tables from a single query. I said "Yes, of course", they asked "So, how?", and I of course said "RTFM". Well, here it is, a little easier to read than in "The Fine Manual".

This works all the way back to Oracle 8, so feel free to test it out. But since it does drop objects, please do it in a test location. Personally, I recommend everybody download and install Oracle XE ( http://www.oracle.com/technology/software/products/database/xe/htdocs/102xewinsoft.html ) on your PC; it gives you a nice safe place to work, play and learn, and as an added bonus Oracle XE comes with Application Express (APEX) already installed and ready to go. Now that I've said that, I don't support PCs, so who knows what it will change in the configuration on your PC. Do so at your own risk.


First, simply create some test tables and a sequence for later use in this example.

SQL> CREATE TABLE BASETABLE (BASEID NUMBER PRIMARY KEY,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL> CREATE TABLE DEST1 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.01

SQL> CREATE TABLE DEST2 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL> CREATE TABLE DEST3 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.06

SQL> CREATE TABLE DEST4 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL>

SQL>

SQL> CREATE SEQUENCE DESTID_SEQ;

Sequence created.

Elapsed: 00:00:00.00

SQL>

Insert some data into the base table for use later

SQL> INSERT INTO BASETABLE SELECT ROWNUM*-1,DBMS_RANDOM.STRING('A',30) FROM DUAL CONNECT BY LEVEL <=500;

500 rows created.

Elapsed: 00:00:00.09

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.00

Now the actual insert. You can see the WHEN and ELSE clauses of the INSERT statement; you can have as many of these as you want, each inserting a different combination of columns in its VALUES section. In this case, I am using a sequence to satisfy the primary key of each DESTx table, followed by the two column names from the select clause at the end.

SQL>

SQL> INSERT ALL
  2  WHEN BASEID=-1 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
  3  WHEN BASEID=-10 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
  4  WHEN BASEID IN (-100,-200,-300,-400) THEN INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
  5  ELSE INTO DEST4 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA)
  6  SELECT BASEID, BASEDATA FROM BASETABLE ORDER BY BASEID DESC;

500 rows created.

Elapsed: 00:00:00.01

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.00


Now to show what happened. From the following query you can see that the BASEID of -1 was inserted, and that its DESTID was the very first record in the insert, as shown by the sequence value of 1.

This data was inserted based on the WHEN BASEID=-1 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA) line in the insert statement.

SQL> SELECT * FROM DEST1;

    DESTID     BASEID BASEDATA
---------- ---------- ------------------------------
         1         -1 uzvIPoJevGslWNzcsEULVsOIHrWtkA

Elapsed: 00:00:00.00



From the following query you can see that the BASEID of -10 was inserted, and that it was the 10th row in the select query's result set. This was inserted based on the WHEN BASEID=-10 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA) line in the insert statement.

SQL> SELECT * FROM DEST2;

    DESTID     BASEID BASEDATA
---------- ---------- ------------------------------
        10        -10 AzRwrjLpzvxtacxBOitYhGDGDuKmaU

Elapsed: 00:00:00.01


From the following query you can see that the BASEIDs of -100, -200, -300 and -400 were inserted. These were inserted based on the WHEN BASEID IN (-100,-200,-300,-400) THEN INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID, BASEDATA) line in the insert statement.


SQL> SELECT * FROM DEST3;

    DESTID     BASEID BASEDATA
---------- ---------- ------------------------------
       100       -100 uJixIEqFTeZEBDOCPYkJgyipInuTdt
       200       -200 ikmTNgdjGTjkINEGbxEFifWAetPBMt
       300       -300 gKcFyianMOtGzdJzVlkjqaLPiwBkic
       400       -400 prucyUxTqhPhUTzarsJRyFQYlOUlWz

Elapsed: 00:00:00.01

From the following query you can see that the remainder of the records in BASETABLE were inserted into the DEST4 table. If you look, you can see that the BASEIDs of -1, -10, -100 and -200 are missing. You will have to trust me that -300 and -400 are missing from the result set as well, but I didn't want this running too long.


SQL> SELECT * FROM DEST4 ORDER BY DESTID;

    DESTID     BASEID BASEDATA
---------- ---------- ------------------------------
         2         -2 fPNMkRbJAEoeaWejzrAigZjKqZVzUl
         3         -3 NDmRQNKmPhAnzfuWhLQDnWIcRVpjLF
         4         -4 DoNnVEskItQAfANavQVHdJWdOeZbAc
         5         -5 SNacUWsrPCPyLwDBxEtndSsiiSTmPW
         6         -6 gLxiVlWXsdcLPhDgLThISCutKBfuOj
         7         -7 sZCNlljiTveZPIUgyEBPalpJPrMdck
         8         -8 UOwvqNxyPXcpsxRmjsxLQGfEsHQOqO
         9         -9 WDwQqUnMHjDautMrYYBMCcjIoNWMKg
        11        -11 BOfKwqtFZWQuLVEHFhMRHrfBGyeTfQ
<SNIP>
        99        -99 VjmavGgzdQroTHutlhcOQjiqlTiLHW
       101       -101 cjuHxrklWRaQmRJZyVShliswLRCgBm
<SNIP>
       199       -199 xvaXYHPkexmFOkXCDBOODqjEatyMwY
       201       -201 fXwQaaSTWAEDrYDqnRHVxLqcQEkbCZ
<SNIP>
       500       -500 eLqsjEKEzWTmQUTsEtHFcRVEkEiQZz

494 rows selected.

Elapsed: 00:00:01.06

And finally, the cleanup.

SQL> DROP SEQUENCE DESTID_SEQ;

Sequence dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE BASETABLE;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST1;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST2;

Table dropped.

Elapsed: 00:00:00.04

SQL> DROP TABLE DEST3;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST4;

Table dropped.

Elapsed: 00:00:00.01

SQL>

SQL> SPOOL OFF


New Look

Herod T - Sat, 2007-03-03 21:09

I finally decided to allow Google to move my blog to the new, now no longer beta, Blogger.

It looks good. New "spot", and I decided on a different look.

If you care, let me know if you have any issues with it.

Thanks.

ORA-00821 Specified value of sga_target is too small, needs to be at least

Neil Jarvis - Thu, 2007-03-01 05:42
Have you resized your SGA_TARGET too small and found you can't now start your database?

If you are using a PFILE then just edit it and set SGA_TARGET to a larger value. But what if you're using an SPFILE? One possibility is to create a pfile from the spfile, edit the pfile, and then either start the database using the pfile explicitly, or remove the spfile and start the database as normal so that the new pfile is picked up.

The problem arises when the spfile is stored in ASM; creating the pfile from it can be a problem. One solution is to create a pfile which calls the spfile in ASM, but after the call to the spfile add an extra line which overrides SGA_TARGET, as follows:

SPFILE='+DATA1/PROD1/spfilePROD1.ora'
SGA_TARGET=1024M

This pfile can be placed in $OH/dbs; thus, the next time you start the database, this pfile will be used. Alternatively, you could explicitly use the 'pfile=' parameter when starting the database, thus:

Startup pfile=$OH/dbs/initPROD1.ora
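Putting the steps together, the whole recovery might look like this from SQL*Plus (a sketch only, assuming the PROD1 instance and the ASM spfile path shown above):

```sql
-- Stub pfile $OH/dbs/initPROD1.ora contains only (from above):
--   SPFILE='+DATA1/PROD1/spfilePROD1.ora'
--   SGA_TARGET=1024M

-- Start the instance using the stub pfile:
STARTUP PFILE=$OH/dbs/initPROD1.ora

-- Because the stub still points at the spfile, the instance is
-- spfile-managed, so the fix can be written back permanently and the
-- SGA_TARGET override removed from the stub (or the stub deleted) later:
ALTER SYSTEM SET SGA_TARGET=1024M SCOPE=SPFILE;
```

The ALTER SYSTEM at the end is the step that stops the problem recurring; without it, removing the stub pfile would put you straight back at ORA-00821 on the next startup.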

Using Oracle SQLDeveloper to access SQLServer

Dong Jiang - Tue, 2007-02-27 05:53

It is a pretty cool feature to use Oracle's SQL Developer 1.1 to access SQL Server.
The steps are:

  • Download jTDS (the open-source SQL Server JDBC driver) from here. Unzip and extract jtds-1.2.jar (or whatever the latest version is).
  • Start Oracle SQL Developer, go to Tools->Preferences->Database->Third Party JDBC Drivers, click "Add Entry" and point to jtds-1.2.jar.
  • Create a new connection, choose the SQLServer tab, and type in the hostname, port, username and password. It appears that the initial connection name has to be the same as the database name so that you can click the "Retrieve database" button. Once you have found the database, you can rename the connection.

Try it out.
Of course, certain things don’t work, such as explain plan and autotrace.
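For reference, the connection details from the last step map onto a standard jTDS JDBC URL; for a hypothetical host and database it would look like this:

```
jdbc:jtds:sqlserver://mssql01:1433/Northwind
```

SQLDeveloper builds this URL from the fields on the SQLServer connection tab, so if a connection fails it can be worth testing the same host, port and database name from another JDBC client.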

Per the comments below, please make sure jTDS 1.2 is used; apparently 1.3 does not work.



Fun With Tom Kyte

Robert Vollman - Mon, 2007-02-26 13:54
As devoted readers may have noticed, my new job doesn't involve nearly as much work with Oracle. I stay sharp by reading Ask Tom, the very site that has provided me with 90% of the answers that I can't find in the Oracle documentation or figure out on my own. Those of you who may find it nerdly to spend lunch hours reading Oracle Q&A are actually really missing out. It's far more entertaining than…

A Strange Production Problem!!!

Vidya Bala - Mon, 2007-02-26 12:11

I suddenly got a call that the front-end applications had frozen (those are the worst calls…). I logged on to the database server but was unable to log in to the database, and at the same time got a call that the…

Network Appliance filer experienced a kernel panic or a low-level system-related lockup. The device then rebooted itself to correct the problem and proceeded normally through the startup process.

The database was a 2-node RAC cluster, with both nodes accessing the NetApp device via NFS mount points. After the NetApp rebooted itself:

Node A looked fine: ORACM was up on the server, and I could log in to the database from Node A.
Node B: ORACM was down, and the instance on Node B was down.

Net result: the application was still unable to connect to either of the nodes using TAF.
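For context, TAF in a setup like this is typically driven by a client-side tnsnames.ora entry along these lines (a sketch only; the host names, service name and retry values here are made up):

```
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = yes)
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = nodea)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = nodeb)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )
```

With an entry like this, a failed connection attempt should simply retry against the surviving address, which is why it was surprising that neither node accepted connections.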

Since the applications were down anyway, the decision was made to restart the Cluster Manager on both nodes and start both instances. This resumed operations fairly quickly (not much time was spent on roll-forward and rollback operations, as we did not have any long-running transactions at the time of the abort).

An SR has been opened to discuss whether the above was the expected behavior.

With RAC I would have expected the following to happen:


Each Oracle instance registers with the local Cluster Manager. The Cluster Manager monitors the status of local Oracle instances and propagates this information to Cluster Managers on other nodes. If the Oracle instance fails on one of the nodes, the following events occur:
1. The Cluster Manager on the node with the failed Oracle instance informs the Watchdog daemon about the failure.
2. The Watchdog daemon requests the Watchdog timer to reset the failed node.
3. The Watchdog timer resets the node.
4. The Cluster Managers on the surviving nodes inform their local Oracle instances that the failed node is removed from the cluster.
5. Oracle instances in the surviving nodes start the Oracle9i Real Application Clusters reconfiguration procedure.

The nodes must reset if an Oracle instance fails. This ensures that:
  • No physical I/O requests to the shared disks from the failed node occur after the Oracle instance fails.
  • Surviving nodes can start the cluster reconfiguration procedure without corrupting the data on the shared disk.

In 9i, cluster reconfiguration is supposed to be fast, remastering resources only where necessary, and processes on Node A should be able to resume active work during reconfiguration since their locks and resources need not be moved.

However, this was not the behavior we saw when one node totally crashed in our case. While RAC is great at helping you load-balance your requests, does it really help with disaster recovery?

Categories: Development

10g not available on all flavors of Vista

Dong Jiang - Fri, 2007-02-23 08:21

According to this “Statement of Direction“, the current plan calls for 32-bit 10gR2 to be available only for the Vista Business, Ultimate and Enterprise editions.
I guess Microsoft has put Oracle in a hard position by bringing out a ridiculous variety of flavors, but Vista Home Basic and Premium may not be able to install 10g at all.
I am wondering about XE. I haven’t tried it myself, but some claim to have installed XE on Vista Home.
PS: In response to APC’s comment, I tried XE on Vista Home Basic and it works.


QEDWiki - introduction video

Rakesh Saha - Fri, 2007-02-23 01:59

Subscribe to Oracle FAQ aggregator