Feed aggregator

Point

Sergio's Blog - Tue, 2008-09-30 10:10
Categories: DBA Blogs

Search Through Region Source in Application Express

Sergio's Blog - Tue, 2008-09-30 09:46

As I was working to migrate an application and its theme over to Application Express 3.1.2 from 2.1.2 I needed to move some images around and update the references accordingly. I remembered that the Application Builder has a feature to search through region source, which came in handy.

Here's where you can find it:

I could have used this type of feature to search through templates as well. If anyone from the APEX team is reading this...
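For cases the Builder's search doesn't cover, the APEX dictionary views offer a do-it-yourself alternative. A hypothetical sketch against the APEX reporting views (the view name, its REGION_SOURCE column, and the application ID 100 are assumptions about your APEX release, not something from the post):

```sql
-- Search every region source in one application for a string.
-- APEX_APPLICATION_PAGE_REGIONS and its REGION_SOURCE column are assumed
-- to exist in your APEX release; 100 is a made-up application ID.
SELECT application_id, page_id, region_name
  FROM apex_application_page_regions
 WHERE application_id = 100
   AND UPPER(region_source) LIKE '%OLD/IMAGE/PATH%';
```

Run it from the application's parsing schema (or another schema that can see the APEX views).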

Categories: DBA Blogs

My OpenWorld 2008: 21 Sep

Oracle Apex Notebook - Mon, 2008-09-29 12:31
Today I had the chance to attend 4 sessions. Two of them were hands-on labs from Oracle Develop. Also, it was nice to put faces to the names I’m used to meeting on the net. I met Carl Backstrom, Joel Kallman, David Peake, Dimitri Gielis, Francis Mignault, John Scott and Patrick Wolf. Session 1: Hands-on Lab: Extending the Oracle Application Express Framework with Web 2.0 - APEX Development Team
Categories: Development

Oracle, the Innovation Company

Project Directions - Mon, 2008-09-29 11:34

Fresh on the heels of our very successful OpenWorld in San Francisco last week, I wanted to highlight this favorable blog post from Joshua Greenbaum at ZDNet.com.

In it Joshua highlights a few topics that are also of particular interest for Projects customers.  First of all there was a good presentation of the Fusion apps that impressed the author.  To quote him: “Oracle has made good on its promise to deliver Fusion Apps, and has greatly exceeded my expectations in doing so. A very impressive debut.”

Second, he recognized that our strategy for Apps Unlimited is delivering on our promise to continue to enhance the products already being used by our customers.  E-Business Suite (of which Projects is one application) has been enhanced in many ways, and we showed and discussed the roadmap for Projects 12.1 at the conference.  Customers can find all of the details on that release on Metalink.

Joshua also discusses the enhancements to the Oracle Business Intelligence applications.  We presented a paper at OOW this year about the OBIEE solution coming for Project Analytics.  It was very well received, and like our customers we are very excited for its release.  It will open up a whole new level of analytics to our customers and give them even greater insight into their projects.

Finally, the author touches on the AIA (Application Integration Architecture) strategy of Oracle.  Rolled out last year, AIA is Oracle’s toolset for gluing all of the acquired products together.  As Joshua writes:

“This product, which orchestrates all the different processes across the vast, and disparate, Oracle Applications stack, is the place where the vision of Oracle becomes reality: There is no way for Oracle to pull off rationalizing its massive acquisition strategy without AIA making all the interprocess communications between, say, Glog, Siebel, Oracle Financials and PeopleSoft HR (and SAP, while we’re at it) seamless, easy, and fast. Absent a highly performant AIA middleware layer, Oracle’s dream of cross application process functionality becomes a user nightmare.”

For Projects, we are no less interested in leveraging AIA than any other part of the company.  We’re excited to be able to tie our applications in with some of the best-of-breed products we’ve acquired, such as Agile and Hyperion.  We’d very much like to hear from our customers as to which apps you would find most useful to integrate with, and to get your input on our AIA roadmap.

 


Delete vs. Truncate - graphically

Claudia Zeiler - Sun, 2008-09-28 04:05
This is too funny not to share. Of course, everyone knows that a truncate is much more efficient than a full-table delete. On my system, still suffering from a slow log writer but absolutely quiet at the moment, I ran an insert of a million rows (CTAS from tbl1 to tbl2), a delete of those million rows (from tbl2), and then a truncate of them (from tbl1). I looked over at Enterprise Manager, and this is what I saw (I added the tags for the benefit of blog readers):

I had not been expecting such a graphical reinforcement of the rule. I had found the full-table delete in some executing code and was curious how much redo it was generating, so I ran René Nyffenegger's script 'how_much_redo'.
Assuming that it gives accurate results, here is what I got for the 3 operations.

SQL> INSERT INTO T1_DEL (SELECT * FROM T1);

1161874 rows created.

SQL> exec how_much_redo;
New Redo Bytes Written: 408985600 (390 MB of redo TO INSERT)


SQL> delete from T1_DEL;

1161874 rows deleted.

Elapsed: 00:06:30.90 (6 MINUTES TO DELETE)
New Redo Bytes Written: 661496320 (630 MB of redo TO DELETE)



SQL> truncate table T1;

Table truncated.

Elapsed: 00:00:06.80 (6 SECONDS TO TRUNCATE)
New Redo Bytes Written: 815616 (less than 1 MB of redo to TRUNCATE)
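René's script isn't reproduced here, but a rough equivalent (not the script itself) is to sample the session's 'redo size' statistic before and after each statement; this sketch assumes SELECT access to the v$ views:

```sql
-- Redo generated by the current session since logon.
-- Query it before and after an operation; the difference is the
-- redo that operation generated.
SELECT ms.value AS redo_bytes
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name = 'redo size';
```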

My OpenWorld 2008: 25 Sep

Oracle Apex Notebook - Sat, 2008-09-27 17:22
The day started too early after the big party the night before. It was the last day of OpenWorld 2008 and there were lots of people going home. Session 1: Dispelling myths about Apex – John Scott I surely didn’t expect to see a full room, given that the session was at 9 AM and there had been the big party the night before. I guess Jes is even more popular now that he has his own book :) I bought the book but
Categories: Development

Lazy Log Writer - The non Resolution

Claudia Zeiler - Sat, 2008-09-27 15:46
OK, after the huge emergency ("You can't attend the last day of OpenWorld because our problems are too big") and after my debugging efforts, how did the problem of the slow writes to the redo log files get resolved?

1. Management asked that the entire database be moved to the NAS disks because they initially seemed faster. I had to let them know that, no, the writes there are not faster.

2. The efforts to cross the great divide and get the storage manager at the other company to actually look at his configuration resulted in, "Since the writes are slow on two different pieces of our machinery, it can't be our fault - it must be Oracle. I'm debugging nothing."

3. The other company informed us, "By the way, we will be installing new hardware in a couple of weeks, at the same time that you are making a major application upgrade." How does that sound for the prospect of a smooth transition? This was followed by an email: "Claudia, are you working with the other company on this?" Not only am I not working with them, I had never heard of it!

4. Management informed me, "Since there will be a hardware change soon, don't bother to follow up on this problem."

So they will install the hardware, we will deploy the new application version, and there will be storm and drama about the excessive waits for redo log writes. I think that I should change my name to Cassandra.

How I Changed My Blog with 3 Column Template and Right & Left Sidebars?

Sabdar Syed - Sat, 2008-09-27 08:34

I have been looking for the steps/code to convert my blog to a 3-column template, but couldn’t find a proper reference. Luckily, today I came across a wonderful blog with “3 Column Templates Step by Step Guides”, which fits my blog template.

It’s as simple as it is easy.

Step 1: Before changing the original template, I tried it out on a test template by creating a new Blogspot blog for testing purposes. (This step is optional.)

Step 2: Take a backup of the template before trying out any customization that requires direct modification of the original template’s HTML code.

Step 3: Follow the link below.

3 Column Templates : Rounders : Left and Right Sidebars

For other types of templates, check this:

3 Column Templates: Step by Step Guides

Regards,

Sabdar Syed.

My OpenWorld 2008: 24 Sep

Oracle Apex Notebook - Fri, 2008-09-26 10:22
It was a big day… Larry Ellison’s keynote with the X thing, my presentation, and the Appreciation Event. I also had the opportunity to get a preview demo of APEX 4.0 features at the Demo Grounds. Session 1: Soup-to-Nuts RAD development using Oracle SQL Developer and APEX – Mike Hichwa, Kris Rice & David Peake, Oracle I had lots of expectations for this session because of the new SQL Developer
Categories: Development

How We Resolved the Account Locked (Timed) issue?

Sabdar Syed - Fri, 2008-09-26 07:01
An application user account in one of our Oracle 10g databases kept getting locked. Below are our findings and the solution to the issue.

Details:
Oracle Database Version:
10g R2 (10.2.0.1)
Application User: APPUSR
Error: ORA-28000: the account is locked

Login as SYSDBA

SQL> conn /as sysdba

Check the APPSUSR account status.

SQL> SELECT username, account_status, profile FROM dba_users WHERE username = 'APPUSR';

USERNAME             ACCOUNT_STATUS       PROFILE
-------------------- -------------------- ---------------
APPUSR               LOCKED(TIMED)        DEFAULT

Here we can see the account status is LOCKED (TIMED) and the default user’s profile is DEFAULT.

Check the resource limits of DEFAULT profile.

SQL> SELECT resource_name,resource_type,limit FROM dba_profiles WHERE profile='DEFAULT';

RESOURCE_NAME                    RESOURCE LIMIT
-------------------------------- -------- ----------
COMPOSITE_LIMIT                  KERNEL   UNLIMITED
SESSIONS_PER_USER                KERNEL   UNLIMITED
CPU_PER_SESSION                  KERNEL   UNLIMITED
CPU_PER_CALL                     KERNEL   UNLIMITED
LOGICAL_READS_PER_SESSION        KERNEL   UNLIMITED
LOGICAL_READS_PER_CALL           KERNEL   UNLIMITED
IDLE_TIME                        KERNEL   UNLIMITED
CONNECT_TIME                     KERNEL   UNLIMITED
PRIVATE_SGA                      KERNEL   UNLIMITED
FAILED_LOGIN_ATTEMPTS            PASSWORD 10
PASSWORD_LIFE_TIME               PASSWORD UNLIMITED
PASSWORD_REUSE_TIME              PASSWORD UNLIMITED
PASSWORD_REUSE_MAX               PASSWORD UNLIMITED
PASSWORD_VERIFY_FUNCTION         PASSWORD NULL
PASSWORD_LOCK_TIME               PASSWORD UNLIMITED
PASSWORD_GRACE_TIME              PASSWORD UNLIMITED

All resource limits for the DEFAULT profile are set to UNLIMITED; only the FAILED_LOGIN_ATTEMPTS attribute is set to a value (10). This is why the user account keeps getting locked (timed). The Oracle documentation states that the FAILED_LOGIN_ATTEMPTS default for the DEFAULT profile was changed in 10.2.0.1 from UNLIMITED to 10.

What we can do is either change the resource limit for the FAILED_LOGIN_ATTEMPTS attribute in the DEFAULT profile, or create a new profile for that user with FAILED_LOGIN_ATTEMPTS set to UNLIMITED. For security reasons we will not tamper with the DEFAULT profile (modifying it is not recommended anyway), so let’s create a new profile and assign it to the user.

Create a profile.

SQL> CREATE PROFILE APPUSR_DEFAULT LIMIT
2 COMPOSITE_LIMIT UNLIMITED
3 SESSIONS_PER_USER UNLIMITED
4 CPU_PER_SESSION UNLIMITED
5 CPU_PER_CALL UNLIMITED
6 LOGICAL_READS_PER_SESSION UNLIMITED
7 LOGICAL_READS_PER_CALL UNLIMITED
8 IDLE_TIME UNLIMITED
9 CONNECT_TIME UNLIMITED
10 PRIVATE_SGA UNLIMITED
11 FAILED_LOGIN_ATTEMPTS UNLIMITED
12 PASSWORD_LIFE_TIME UNLIMITED
13 PASSWORD_REUSE_TIME UNLIMITED
14 PASSWORD_REUSE_MAX UNLIMITED
15 PASSWORD_VERIFY_FUNCTION NULL
16 PASSWORD_LOCK_TIME UNLIMITED
17 PASSWORD_GRACE_TIME UNLIMITED;

Profile created.

Assign the newly created profile to the user as default profile.

SQL> ALTER USER appusr PROFILE appusr_default;

User altered.

Unlock the user account:

SQL> ALTER USER appusr ACCOUNT UNLOCK;

User altered.

Now check the status of the APPUSR user again.

SQL> SELECT username, account_status, profile FROM dba_users WHERE username = 'APPUSR';

USERNAME             ACCOUNT_STATUS       PROFILE
-------------------- -------------------- ---------------
APPUSR               OPEN                 APPUSR_DEFAULT

Regards,
Sabdar Syed,
http://sabdarsyed.blogspot.com

Web Cache Compression and MOD_GZIP

Duncan Mein - Fri, 2008-09-26 04:07
Some of my colleagues are working on a project where bandwidth is massively limited (64k). One suggestion for improving application response time was to use MOD_GZIP (an open-source compression extension for Apache) to compress the outbound HTTP traffic. The only drawback is that MOD_GZIP is not supported by Oracle.

Since we are using Oracle Application Server, Web Cache achieves exactly the same result: simply add a compression rule to Web Cache for the URL regular expression /pls/apex/.*$

We noticed that without any compression of the HTTP outbound traffic, our test page took 30 seconds to fully render on a 64k link. Turning on compression reduced the rendering time to 7 seconds. Very impressive.

Navigating through an application with compression turned on was noticeably quicker than one without compression.

To test whether your outbound HTTP traffic is compressed, grab the Live HTTP Headers extension for Firefox and look for a line like Content-Encoding: gzip in the response headers.

I configured both APEX and Discoverer Viewer to use compression by following Metalink article 452837.1.

Data Modeling with SQL Developer

Jared Still - Fri, 2008-09-26 00:33
Unlike OpenWorld 2007, there were many database-oriented sessions at Oracle OpenWorld 2008. There were many good performance-oriented sessions, so many in fact that there were several conflicts in the schedule, and in several time slots I had to pick one from multiple choices.

One of the more interesting sessions (for me anyway) at OOW 2008 was a session not on database performance, but on data modeling.

The SQL Developer team has been hard at work creating a data modeling plugin for SQL Developer.

This appears to be a very full-featured tool, and looks like the answer to the question "What will replace Oracle Designer?"

While Designer is much more than a data modeling tool, that is one of the core features of the tool, and many folks have used it just for its data modeling capabilities.

The new ERD tool is no lightweight; it is quite full-featured from a database modeling and design standpoint.

Some of the features included:
  • Domains generated from data
  • Real logical and physical modeling, not just one model with 2 different names.
  • The ability to reverse engineer several schemas at once and have them appear not only as a master model, but each individually as a sub model.
  • Sub model views may be created on sets of objects as well.
  • The tool can determine all tables related to a table through FKs and create a sub model based on that set.
  • Two forms of notation: Barker and IE
  • Many options for displaying sub/super types (D2k fans rejoice!)
  • Glossary - a predefined set of names. These can be used to enforce naming conventions for entities, tables and relations.
  • Schema comparison with DDL change generation
Also of note: in addition to Oracle, schemas can be imported from SQL Server, DB2, or any ODBC-connected database.

The repository can be either file-based or database-based.
There are two versions of the tool: a plugin to SQL Developer and a standalone version. The standalone version uses only the file-based repository.

Now for the bad news.

The release date has not been established. The only release information given was 'sometime in the 2009 calendar year'. As the database repository has not yet been designed, the long time to release is understandable.

And finally, licensing has not been established. It might be free, it might not. If not, at least we can hope for it to be reasonably priced. Personally, I think having a decent data modeling tool that comes free of charge with SQL Developer would contribute to higher-quality databases, as more people would use a real database design tool rather than a drawing tool.

There was probably more that didn't make it into my notes.
Suffice it to say this is a great development for data modelers and database designers.

Following are a few screenshots taken during the presentation.

Categories: DBA Blogs

A lazy log writer

Claudia Zeiler - Fri, 2008-09-26 00:11
I've been laughing because I live one block from Moscone Center. It was closer for me to walk between the conference and my house than it was to walk between some of the sessions. Today, I saw the other side of that coin. I got ordered back to work and missed the last day of the conference (and Chen Shapira's presentation!!!)

What was going on at work? Not much - as in not what should have been. I like looking at the performance monitor on Enterprise Manager for a quick glance at what is going on. It wasn't a pretty picture.



Clicking on the 'Blocking Sessions' tab, I saw that the log writer session was blocking various other sessions.

I went into the alert log and was pointed to a log writer trace file. Inside the trace file I found

*** 2008-09-25 15:28:24.239
Warning: log write time 15460ms, size 6999KB

*** 2008-09-25 15:28:24.836
Warning: log write time 590ms, size 6898KB

*** 2008-09-25 15:28:29.852
Warning: log write time 5020ms, size 6999KB

I looked at metalink and got

" The above warning messages has been introduced in 10.2.0.4 patchset. This warning message will be generated only if the log write time is more than 500 ms and it will be written to the lgwr trace file .

"These messages are very much expected in 10.2.0.4 database in case the log write is more than 500 ms. This is a warning which means that the write process is not as fast as it intented to be . So probably you need to check if the disk is slow or not or for any potential OS causes. "
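One way to corroborate what the trace file is saying (assuming SELECT access to the v$ views) is to look at the cumulative log writer wait events:

```sql
-- Average redo write and commit wait times since instance startup.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_ms
  FROM v$system_event
 WHERE event IN ('log file parallel write', 'log file sync');
```

'log file parallel write' is LGWR's own write wait; 'log file sync' is what user sessions see at commit time.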

We just upgraded to 10.2.0.4. Our storage is across the great divide at another company. We are often short of answers other than, "Everything is configured correctly". With quite a bit of work we have gotten a pair of LUNS allocated for redo logs.

As a test, I moved the redo logs from the SAN to an NFS drive - NOT one that should be allocated to redo. Here was the immediate result:



The log writer waits stopped. Compliments from management. A request from management to storage management to move the entire database to this kind of storage, everyone is happy

almost.

Tonight I looked at the trace file:

*** 2008-09-25 22:53:34.154
Warning: log write time 750ms, size 0KB
*** 2008-09-25 22:53:35.943
Warning: log write time 1770ms, size 28KB
*** 2008-09-25 22:53:39.889
Warning: log write time 940ms, size 0KB

Log writer is taking forever, and it isn't even doing anything!

To be continued..... and detective suggestions welcome!








OOW2008 day 5 – It’s A Wrap

Pawel Barut - Thu, 2008-09-25 23:52
Written by Paweł Barut
Before I start to summarize my Day 5 at Oracle OpenWorld, I would like to add a few words about Day 4 (Wednesday).

Managing Very, Very Large XML Documents with Oracle XML Database
It was a very good session, one of those where practical experience is shared. The presenters showed step by step how to load very large XML files into the DB:
  • Setting up the XML schema
  • Schema annotation techniques and a few directives
  • Direct Insert Store for XML
  • Differences in loading XML into XML DB between 10.2 and 11g


The Appreciation Event
It was a very nice concert on Treasure Island. I especially liked Seal. Besides that, there was lots of good food and drinks.

Day 5

I will start with the session Oracle’s New Database Accelerator: Query Processing Revolutionized. As I expected, it was related to the announcement made yesterday. My description yesterday was not perfect; now I will try to fix that. First of all, we have 2 new machines, but one of them is included in the second one.
Oracle Exadata Storage Server - this is hardware from HP: 2 Intel quad-core processors, 12 disks (300 GB at 15K RPM, or 750 GB at 10K RPM) with a disk controller optimized for best bandwidth, and 2 InfiniBand connectors to connect to external equipment. The code for this product is HP DL180 G5 (at least that was on one of the slides). This computer is sold with Oracle Enterprise Linux 5.1 preinstalled. The main role of this machine is to store database files; it cannot be used to store normal files.
The second piece of hardware is the HP Oracle Exadata Database Machine - a rack equipped with 14 Oracle Exadata Storage Servers and 8 DB servers, each with 8 Intel processor cores. The DB servers run Oracle Enterprise Linux 5.1 and Oracle RDBMS 11g (11.1.0.7). Even more: up to 6 such Database Machines can be connected into a cluster.
Where is the revolution? In the way the Oracle DB communicates with storage. There is a new protocol, iDB, that allows query predicates to be pushed down to the storage. With this, the amount of data transferred from storage to DB server is minimized. This feature is called Smart Scan. It can be leveraged only when full table (or partition) scans occur, and it still preserves read consistency.
And here is the technical spec from Oracle.

And briefly on my other sessions: Oracle Database Performance on Flash Drives
A very interesting session showing results of different approaches to using Flash drives. In conclusion, a formula was presented for when Flash drives can help with performance, when it is better to stay with fast rotating drives, and when even low-cost, high-capacity drives will do. As a side note, when power usage is considered, Flash drives can be even more economical than traditional rotating drives.

Oracle ACE Director: Birds-of-a-Feather Tips and Techniques
A session led by Oracle ACEs: Lewis Cunningham, Arup Nanda, Eddie Awad, Mark Rittman, Tim Hall, Hans Forbrich and Bradley Brown. The session was Q&A style. The ACEs answered questions based on their own experience, which is sometimes different from Oracle’s official recommendations.

Real-World XML DB Examples from Oracle Support
This was a rather chaotic session and did not give me any useful information - IMHO a waste of time.

And the last session by Tom Kyte Reorganizing Objects
Tom did a great job debunking various myths about when DBAs should reorganize tables and indexes. It was a really great speech. Maybe there was too much material for a 1-hour session, as everything was shown a little bit in a hurry.

The day ended with a small party, It’s A Wrap.
While writing this I’m sitting in my hotel watching the fireworks over the San Francisco port. Tomorrow I’m leaving San Francisco for an 18-hour trip back home.

Cheers,
Paweł
--
Related Articles on Paweł Barut blog:
Categories: Development

OpenWorld unconference presentation about Rails on Oracle

Raimonds Simanovskis - Thu, 2008-09-25 16:00

On the last day of Oracle OpenWorld I did my unconference presentation – Using Ruby on Rails with legacy Oracle databases.

As I did not know whether anyone would come to listen, I was glad that six people attended (including Kuassi Mensah from Oracle, who is helping to promote Ruby support inside Oracle). On the previous day I also managed to show parts of my presentation to Rich and Anthony from the Oracle AppsLab team.

I published my slides on Slideshare:

And I published my demo project on GitHub:


Thanks to all the Oracle people who recognize my work on Ruby and Oracle integration, and I hope that our common activities will increase the number of Ruby and Rails projects on Oracle :)

Categories: Development

Oracle Open World 2008 Podcast

Mark A. Williams - Thu, 2008-09-25 14:45
I've never really been a prolific blogger, and the "interruption" of OOW 2008 has definitely made that worse. However, my podcast with Tom Haunert of Oracle Magazine is now available at the following location:

Oracle Magazine Feature Casts

The title of the podcast is:

Origins of a .NET Developer

Exadata - has it been in development for two years?

Nigel Thomas - Thu, 2008-09-25 14:09
Just my nit-picking mind, but why does the Exadata technical white paper say (at the time of writing, at least) that it is "Copyright © 2006, Oracle"? I don't think they've been working on it that long - much more likely some soon-to-be-embarrassed technical writer has cut and pasted the standard boilerplate from an out-of-date source.

What, no rule-driven content management?

Exadata and the Database Machine - the Oracle "kilopod"

Nigel Thomas - Thu, 2008-09-25 14:09
There have already been plenty of interesting posts about Oracle Exadata - notably of course from Kevin Closson here and here (update: and this analysis from Christo Kutrovsky) - but I just have one thing to say.

Larry Ellison was quoted in a number of reports saying the Oracle Database Machine "is 1,400 times larger than Apple’s largest iPod".

Larry, when you want to get across that something is big - really big that is, industrial scale even - just don't compare it with a (however wonderful) consumer toy. Not even with 1,400 of them. 1.4 kilopods is so not a useful measure.

By the way, can I trademark the word kilopod please? (presumably not - a quick google found a 2005 article using the same word, and there is some kind of science-in-society blog at kilopod.com).

OOW2008 day 4 – HP-Oracle Exadata Server Announcement = Extreme Performance

Pawel Barut - Wed, 2008-09-24 18:25
Written by Paweł Barut
Today was a very busy day for me. I’m writing just after the Larry Ellison keynote at which the first Oracle hardware was presented. I’m a little bit skeptical about whether Oracle is really going into the hardware business. Rather, Oracle had a great idea for how to solve the problems of really big databases and growing demand for storage, and joined forces with HP to create a new quality of data processing. So, how does it look? In one box we have 2 Intel quad-core processors and 12 disks as storage - it is called the Exadata Programmable Storage Server. This machine is not just storage, and it is not a pure DB server either: it can process queries (so it is a DB), and it stores data (so it is storage). But it needs a separate DB server to work at full performance. How it works: the DB server receives a request for data. It then retrieves the data from the Exadata Storage Server, but the data gets filtered at the storage, so the amount of data transferred from the Storage Server to the DB server is reduced. This allows much better overall performance.
At least this is my understanding. I will also go to the Demo Grounds to get a direct look at this machine and a more detailed specification.

Today I’ve also participated in few sessions.
Soup-to-Nuts RAD Development Using Oracle SQL Developer and Oracle Application Express This was a quick demonstration of how to create a simple application using APEX and SQL Developer. It focused on the modeling capabilities of SQL Developer and the integration between APEX and SQL Developer: it was shown how to view APEX objects in SQL Developer and how to leverage this integration. There was also a presentation of the new functionality in APEX for migrating Oracle Forms to APEX.

Agile Database Testing Techniques (IOUG) This was a very interesting session giving practical insight into how to organize unit tests in the DB, how to validate that upgrade scripts ran successfully, and how to prepare a DB environment for daily builds. The presenter shared his real-world experience, and this was the biggest value of the session.

SQL Tuning Roundtable with the Experts This one was rather boring, as the topics and answers were almost exactly the same as in the session “Inside Oracle Database 11g Optimizer: Removing the Mystery” that I attended yesterday.

Now I’m sitting in the OCP Lounge, and in a few minutes I’m going to the last session, Managing Very, Very Large XML Documents with Oracle XML Database, and then to the Appreciation Event.

Cheers,
Paweł

BTW, this is my post #100.
--
Related Articles on Paweł Barut blog:
Categories: Development

My OpenWorld 2008: 23 Sep

Oracle Apex Notebook - Wed, 2008-09-24 14:27
The first session of the day was scheduled for 11:30 AM. I had some free time, which I used to walk around and see what was happening. I went to Moscone North and, after blogging a bit on the couches, went to the Unconference section and then the bookstore. I couldn’t find John Scott’s Pro Oracle APEX book, but later that night he told me that the book was there, so maybe I’ll go to the bookstore
Categories: Development

Subscribe to Oracle FAQ aggregator