Feed aggregator

Read/Write NTFS on my MacBook Pro

Bas Klaassen - Thu, 2010-08-12 03:41
Today I tried to start my virtual Linux machines (created in VMware Workstation on Windows to install/upgrade eBS environments) on my MacBook Pro. I downloaded a trial version of VMware Fusion. When trying to start a virtual machine, VMware would show me the following error 'Read only file system' and the machine would not start. It seemed I could not write to the folders containing the VMware...
Categories: APPS Blogs

FIRST_ROWS vs ALL_ROWS

Robert Vollman - Wed, 2010-08-11 14:35
A colleague asked me some questions about FIRST_ROWS and ALL_ROWS, but I'm hesitant to blog about it because it's already been done so well by others -- the best example would probably be Sachin Arora. Nevertheless, it never hurts to lend another voice to the Oracle choir, so here's everything I know on the topic. FIRST_ROWS and ALL_ROWS are values for the optimizer setting OPTIMIZER_MODE. You can...

Discoverer OLAP is certified with OLAP 11g

Keith Laker - Tue, 2010-08-10 06:02
A few people have asked me recently when an updated version of Discoverer OLAP will be released that supports the 11g OLAP Option. The answer is simple: it has already been released! (But I guess that many people missed it because it was bundled as part of a broader patchset and not widely announced.)



If you are interested, you can download it from OTN under Portal, Forms, Reports and Discoverer (11.1.1.3.0)

An updated version of the BI Spreadsheet add-in has been released too and can also be downloaded from OTN


Categories: BI & Warehousing

Fame at last for my biggest Apex project to date

Tony Andrews - Mon, 2010-08-09 11:32
I'm very pleased to see that the Apex project I started and worked on for several years is now the subject of an entry under Customer Quotes on the OTN Apex page. "At Northgate Revenues & Benefits, we have used APEX to replace our legacy Oracle Forms system comprising around 1500 Forms. Our user interface has 10,000 end users daily, across 172 clients, who this year sent out over 12 million...

Moving blog from wordpress.com to Jekyll

Raimonds Simanovskis - Sun, 2010-08-08 16:00
Jekyll
Why to move?

This blog was hosted for several years on wordpress.com, as it was the easiest way to host a blog when I started. But recently I have not been very satisfied with it, for the following reasons:

  • I include code snippets in my blog posts quite often, and several times I had issues with code formatting on wordpress.com. I used MarsEdit to upload blog posts, but when I read previous posts back, quite often my < and > symbols had been replaced with &lt; and &gt;.
  • I would prefer to write my posts in Textile rather than plain HTML (I think this might also be possible with wordpress.com, but it was not obvious to me).
  • I didn't quite like the CSS design of my site and wanted to improve it, but I prefer minimalistic CSS stylesheets and didn't want to learn how to design CSS specifically for WordPress sites.
  • A WordPress site was too mainstream; I wanted something more geeky :)

When I do web app development I use TextMate for HTML / CSS and Ruby editing (sometimes CSSEdit when I need to do heavier CSS work), Textile for wiki-style content editing in my apps, git for version control, and Ruby rake for build and deployment tasks. Wouldn't it be great if I could use the same toolset for writing my blog?

What is Jekyll?

I had heard about the Jekyll blogging tool several times, and now I decided that it was time to start using it. Jekyll matched my needs exactly:

  • You can write blog posts in Textile (or in Markdown)
  • You can design HTML templates and CSS stylesheets as you want and use Liquid to embed dynamic content
  • You can store all blog content in git repository (or in any other version control system that you like)
  • And finally, you use the jekyll Ruby gem to generate static HTML files that can be hosted anywhere

It sounded quite easy and cool, so I started the migration.

Migration

Initial setup

I started my new blog repository from the canonical example site from Jekyll's creator. You just need to remove the posts from the _posts directory and start creating your own.
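
For illustration, creating new post files can itself be automated with a small rake task. The following is just a sketch (the task name and details are my own example, not part of the canonical site); it scaffolds a Textile post under _posts with the YAML front matter that Jekyll expects:

# Sketch of a Rakefile task that scaffolds a new Textile post.
# Usage: rake new_post title='My post title'
require 'date'

desc 'Create a new post in _posts'
task :new_post do
  title = ENV['title'] || 'new-post'
  slug  = title.downcase.strip.gsub(/[^a-z0-9]+/, '-').gsub(/\A-+|-+\z/, '')
  path  = File.join('_posts', "#{Date.today.strftime('%Y-%m-%d')}-#{slug}.textile")
  abort "#{path} already exists" if File.exist?(path)

  File.open(path, 'w') do |f|
    f.puts '---'             # YAML front matter delimiter
    f.puts 'layout: post'
    f.puts "title: #{title}"
    f.puts '---'
  end
  puts "Created #{path}"
end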

Export from wordpress.com

First I needed to export all my existing posts from wordpress.com. I found a helpful script which processes the wordpress.com export and creates Textile source files for Jekyll, as well as a comments import file for Disqus (more about that later). It did quite a good job, but I still needed to go through all the posts manually and make the following changes:

  • I needed to manually change the HTML source for lists into Textile-formatted lists (the export conversion script converted only headings to Textile formatting), as otherwise they did not look good when parsed by the Textile formatter.
  • I needed to wrap all code snippets in Jekyll code highlighting tags (which use the Pygments tool to generate HTML); as I had not previously used a consistent formatting style, I could not do this with a global search & replace.
  • I needed to download all uploaded images from wordpress.com and put them in the images directory.

CSS design

As I wanted simpler and more maintainable CSS stylesheets, I didn't just copy the previous CSS files but manually picked just the parts I needed. And now that I had full control over the CSS, I spent a lot of time improving my previous design (font sizes, margins, paddings etc.), but at least I am now more satisfied with it :)

Tags

As all the generated pages are static, there is no standard way to build typical dynamic pages such as a list of posts with a selected tag. The good thing is that I can create rake tasks that re-generate all 'dynamic' pages as static pages whenever I change the original posts. I found some examples that I used to create my rake tasks for tag page and tag cloud generation.
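
As an illustration, a simplified version of the idea behind the tag-pages task looks roughly like this (the tag_page layout name and tags/ directory here are just examples, not necessarily what I ended up with). It collects tags from the YAML front matter of all posts and writes a small stub page per tag, which jekyll then renders through the layout:

require 'yaml'
require 'fileutils'

desc 'Generate one static page per tag under tags/'
task :tags do
  tags = []
  Dir['_posts/*'].each do |post|
    text = File.read(post)
    # the front matter is the YAML block between the two leading '---' lines
    next unless text =~ /\A---\s*\n(.*?)\n---\s*\n/m
    tags.concat(Array(YAML.load($1)['tags']))
  end

  tags.uniq.sort.each do |tag|
    dir = File.join('tags', tag)
    FileUtils.mkdir_p(dir)
    File.open(File.join(dir, 'index.html'), 'w') do |f|
      f.puts '---'
      f.puts 'layout: tag_page'   # layout (assumed to exist) lists matching posts via Liquid
      f.puts "tag: #{tag}"
      f.puts '---'
    end
  end
end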

Related pages

Previously wordpress.com showed some automatically generated related posts for each post. Initially it was not obvious how to do this (site.related_posts was always showing just the latest posts). Then I found that I needed to turn on the lsi option and, in addition, install the GSL library (I installed it with Homebrew) and RubyGSL (as otherwise related-posts generation was very slow).
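
If it helps, the lsi option can be turned on either with "lsi: true" in _config.yml or with the --lsi flag on the command line. A tiny rake task sketch (assuming GSL and RubyGSL are already installed; this is illustration, not my exact task):

desc 'Rebuild the site with LSI-based related posts'
task :generate_related do
  sh 'jekyll --lsi'   # shells out to the jekyll command with LSI enabled
end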

Comments

The next issue is that in a static HTML site you cannot store comments, so you need to use a hosted commenting system. The most frequently used commenting system on Jekyll sites is Disqus, and therefore I decided to use it as well. It took some time to understand how it works, but it provides all the necessary HTML snippets that you need to include in your layout templates, and then it just works.

The previously mentioned script also included the possibility of importing my existing comments from wordpress.com into Disqus. But that was not quite as easy as I had hoped:

  • The Disqus API that adds comments to an existing post (found by URL) does not create new discussion threads if they do not exist. Therefore I first needed to open all existing pages to create the corresponding Disqus discussion threads.
  • As in the static HTML case I do not have any post identifiers that could be used as discussion thread identifiers, I needed to ensure that the new URLs of my blog posts are exactly the same as the old ones (in my case I needed to add / at the end of URLs, as a URL without a trailing / is considered a different URL by Disqus).
  • There was an issue where some comments in the export file had the wrong date in the URL (in cases where the draft of a post was prepared earlier than the post was published), and I needed to fix that in the export file.

So be prepared to import and then delete the imported comments several times :)

RSS / Atom feeds

If you have existing subscribers to your RSS or Atom feed, then you either need to keep the same URL for the new feed or redirect it to the new feed URL. In my case I created a new FeedBurner feed and redirected the old feed URL to the new one in the .htaccess file.

Other URL mappings

In my case I renamed categories to tags in my blog posts and URLs, but as the old category URLs were indexed by Google and showing up in Google search results, I redirected them as well in the .htaccess file.

Search

If you want to allow search on your blog, the easiest way is just to add a Google search box with the sitesearch parameter.

Analytics

Previously I used the standard wordpress.com analytics pages to review statistics; now I have added Google Analytics for that purpose.

Deployment

Finally, after all the migration tasks, I was ready to deploy my blog to production. As I had an account at Dreamhost, I decided it was good enough for static HTML hosting.

I created rake tasks for deployment that use rsync for file transfer, and now I can just run rake deploy to generate the latest version of the site and transfer it to the hosting server.
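
The deployment task itself is nothing fancy. A simplified sketch (the host, user and path are placeholders, not my real settings):

desc 'Generate the static HTML into _site'
task :generate do
  sh 'jekyll'                       # builds the site into ./_site
end

desc 'Generate the site and rsync it to the hosting server'
task :deploy => :generate do
  sh "rsync -rtz --delete _site/ user@example.com:~/blog.example.com/"
end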

After that I needed to remap the DNS name of blog.rayapps.com to the new location and wait several hours until this change propagated over the Internet.

Additional HTML generation speed improvements

When I was doing regular HTML re-generation using jekyll, I noticed that it started to get quite slow. After investigating, I found that the majority of the time went on Pygments execution for code highlighting. To fix this issue I found jekyll patches that implement Pygments result caching, and I added it as a 'monkey patch' to my repository (it stores cached results in the _cache directory). After this patch my HTML re-generation happens instantly.
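
The idea behind the caching patch is simple; roughly (this is a sketch of the approach, not the exact patch): key the Pygments output by a digest of the code snippet and lexer, store it under _cache, and only run Pygments on a cache miss. In the real patch this logic is hooked into Jekyll's highlight tag rendering.

require 'digest/md5'
require 'fileutils'

module PygmentsCache
  CACHE_DIR = '_cache'

  # Returns cached highlighted HTML if present, otherwise runs the given
  # block (the expensive Pygments call) and stores its result.
  def self.fetch(code, lexer)
    FileUtils.mkdir_p(CACHE_DIR)
    path = File.join(CACHE_DIR, Digest::MD5.hexdigest(lexer.to_s + code) + '.html')
    if File.exist?(path)
      File.read(path)
    else
      html = yield
      File.open(path, 'w') { |f| f.write(html) }
      html
    end
  end
end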

My blog repository

I have published the 'source code' of my blog on GitHub, so you can use it as an example if I have convinced you to migrate to Jekyll as well :)

The whole process took several days but now I am happy with my new “geek blogging platform” and can recommend it to others as well.

Categories: Development

Agent blocked....

Bas Klaassen - Fri, 2010-08-06 04:29
In our 11g Grid Control I noticed an agent that was no longer uploading any data to the OMS. When checking the status of the agent I noticed the following:
Last successful upload : (none)
Last attempted upload : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload : 199
Size of XML files pending...
Categories: APPS Blogs

APEX 4.0 Enhancements: Validating Form Data

Anthony Rayner - Thu, 2010-08-05 07:48
Oracle Application Express 4.0 introduces lots of big new features: websheets, dynamic actions, plug-ins, RESTful web services, team development, updated charts; the list goes on. But there are also many enhancements to existing functionality that we hope will help to simplify the overall process of developing applications in APEX. One such area, and the focus of this post, is how data is validated. This post will give you an overview of what's changed with validations and how these changes will make your daily development life a little easier.


Item-Centric Validation
Historically in APEX, if you wanted to validate data input on a page, you created a validation. The validation is a separate component that you need to define and maintain. Now in APEX 4.0, the actual item can handle some simple validation of the data it receives. For example, all items (both native to APEX and plug-ins) now have a 'Value Required' attribute. By setting this to 'Yes', APEX will automatically validate that a value has been entered and raise an error if not, with no separate validation required.

In addition to this 'Value Required' validation, some item types also validate their data based on how the item is defined. For example, the new 'Number' item type, which you can use for handling numeric data, contains settings for 'Minimum Value' and 'Maximum Value'. When these are defined, APEX will automatically validate the data received against these settings and raise appropriate errors.

Settings for the new 'Number' item; the highlighted settings are automatically validated

The new datepicker item in APEX 4.0 also supports this type of automatic validation. In addition to the 'Value Required' setting (available for all items), the datepicker also allows you to set 'Format Mask', 'Minimum Date', 'Maximum Date' and 'Year Range'. When these are defined, APEX will again automatically validate the data received against these settings and raise appropriate errors.

Settings for the new 'Date' item; the highlighted settings are automatically validated

Plug-in items may also contain automatic validations, depending on whether the plug-in author has coded in this support.

You also have a couple of ways of customising the error message that is displayed by APEX when these validations fail, in terms of content and position. To override the default error messages, please refer to this section of the user guide. This details all of the 'Text Messages' that you would need to define in your applications to override the defaults. If you want to change where the error is displayed on screen, please see the 'Default Error Display Location' attribute available via 'Edit Application Properties' on the application homepage.

Finally, debug mode has also been enhanced to show when these validations are executed and whether they passed or failed.

Item-based validations offer a more logical approach to validating data and require fewer moving parts, which means less to define and less to maintain. Of course, for other more complex situations a separate validation will still be required, but this greatly simplifies some of the more common, simple scenarios.


Button-Centric Validation Exclusion
Again, historically in APEX if you don't want a validation to fire when certain buttons are pressed, you would define that logic in the validation. Let's take an example. If you have a typical 'Form' page used for inserting, updating and deleting data, you may want your validations to fire for insert and update, but not for delete. This would involve going through each validation and setting some condition such as where REQUEST != 'DELETE' or similar, to prevent the validation from firing. Now, in APEX 4.0, the actual button can be defined to either 'Execute Validations' or not.

Specify that pressing the button should not cause any validations to fire.

This is much easier: all you need to do is set this at button level and that's it; no item, plug-in or custom validations will fire. Wizard-created forms will set this up for you automatically, so when creating these types of forms, the 'Create' and 'Save' buttons execute validations and the 'Delete' and 'Cancel' buttons do not.

You can also override this at validation level by setting the 'Always Execute' validation attribute to 'Yes' (defaults to 'No'). This could be useful for example if you want to always execute a security check, regardless of any button exclusions.

Debug mode has again been enhanced here to show if validations are prevented from firing because of the button setting.


Tabular Form Validations
APEX 4.0 now also supports declarative validation of tabular form data. Before APEX 4.0, there was no declarative support for validating this type of form and you would have to do a lot of manual PL/SQL to validate your data. Currently, tabular form validations only support a subset of what's available with page item validation, but do cater for some of the more common scenarios (required values, type checks and string comparisons). We are looking to extend this in a future release of APEX.


Error Message Label Placeholders
This is small but one of my favourites. When defining an error message that displays when a validation fails, if the validation is associated with a specific page item, you can now use the #LABEL# placeholder to dynamically reference the associated item's label.

Use #LABEL# instead of hard-coding the associated item label text.

So instead of having to duplicate the label text in the error message (and having to remember to change it if you change the item's label), as was historically the case, just use the #LABEL# placeholder and that's it. Again, less to define and less to maintain. An equivalent placeholder, #COLUMN_HEADER#, is also available for the new tabular form validations.


Upgrading Applications
So finally, what about your existing APEX applications that were built long ago, where you want to take advantage of some of these new features? Well, take a look at the 'Upgrade Application' feature available via the 'Utilities' menu from the application homepage. This assists you in upgrading your application to use some of the new features in APEX 4.0.

Of particular relevance to validations are the following upgrade types:
  • Update Text Field Item to Number Field Item, where appropriate - Locates where you have unconditional 'Is Numeric' validations on 'Text Field' items and upgrades them to use the 'Number' item type with built-in numeric checking. Also removes the now-redundant separate validation.
  • Update Value Required item attribute to Yes, where appropriate - Locates where you have unconditional 'Not Null' validations on items and sets those items' 'Value Required' attribute to 'Yes'. Also removes the now-redundant separate validation.
  • Numeric, Required and Date Picker Item updates based upon conditional validations - Simply locates where you have conditional validations for 'Is Numeric', 'Not Null' or 'Is Valid Date' on 'Text Field' items, for your manual review, so you can determine whether the validation can be replaced with item settings and button exclusions.


So quite a few little enhancements that hopefully add up to easier and more intuitive data validation. Good luck with your new APEX 4.0 style validations and let us know what you think!

Many thanks to Patrick Wolf for reviewing this post and filling in the gaps.
Categories: Development

One Million

Robert Vollman - Wed, 2010-08-04 16:51
Today, August 4th, shortly after lunch, ThinkOracle had its one millionth visitor. Care for a stroll down memory lane? I started this site May 16, 2005, shortly after starting a new position with a company that made financial software. The idea was to make my own contribution to the growing Oracle community, expand my knowledge, improve my technical writing, and it never hurts to establish a...

Note to Newbies: Know when it's time to call Oracle Support

Lisa Dobson - Wed, 2010-08-04 07:50
This thread on the OTN forums caught my eye today. Whilst I love the forums and think they are a great place to go for help and advice, there are times when they are not the best medium for support, and this was one of them. Forum questions are answered by volunteers, people who have day jobs to do themselves, who are happy to share their experience and help out others with queries. That's why you...

What will Oracle buy next?

Lisa Dobson - Tue, 2010-08-03 14:26
Stephen Jannise of ERP Software Advice has written an interesting article around likely targets for Oracle's next acquisition. The article is well written and well thought out, being based on research into the past 5 years of Oracle acquisitions alongside a study of the current market. There's also a diagram showing all of the companies that Oracle has acquired since the PeopleSoft acquisition...

APEX 4.0 - Do you want to know more?

Anthony Rayner - Tue, 2010-08-03 07:32
Do you want to ask the Vice President of Database Tools at Oracle and original developer of Application Express a question about Oracle Application Express 4.0?

Mike Hichwa is going to be interviewed by Oracle Profit Magazine on APEX, so if you have something you want to ask, questions are being collected for consideration via Twitter. Tweet your questions to @OracleProfit, with the hash tag #askprofit. Selected submissions will receive a 1GB flash drive — and be printed in the November issue of Profit Magazine.
Categories: Development

Changing User's Default Schema

Robert Vollman - Mon, 2010-08-02 20:30
Last week I got a question about changing a user's default schema. My colleague is supporting a typical database application which is configured to use the user/schema that was created for its database. Many queries were written for this application that use that schema owner, but my colleague would like to run those queries with his own account instead - either because he doesn't want to log in...

We have moved!

Inside the Oracle Optimizer - Sat, 2010-07-31 12:57
You might have been wondering why things had gone so quiet on the Optimizer development team's blog Optimizer Magic over the last few months. Well the blog has moved to blogs.oracle.com/optimizer. All of the old articles have moved too and we plan to be a lot more active at our new home, with at least one new post every month.


Categories: DBA Blogs, Development

Dell and HP to Certify and Resell Oracle VM, Oracle Enterprise Linux, and Oracle Solaris

Sergio's Blog - Fri, 2010-07-30 02:42
Those of you who follow us on twitter/ORCL_Linux probably already saw this. HP and Dell yesterday announced that they'll be certifying and reselling Oracle VM, Oracle Enterprise Linux and Oracle Solaris on their x86 servers.
Categories: DBA Blogs

Quick script to maintain a diary

Vattekkat Babu - Fri, 2010-07-30 01:30

I like to keep my daily notes in a folder in the filesystem with filenames of the form yyyymmdd.otl, using VIM Outliner. Here is a small DOS script to make a file for the day if it doesn't exist and then open it. Name it diary.cmd and keep it in your path.

RDBMS events

Fairlie Rego - Thu, 2010-07-29 23:36
RDBMS events are often used for additional tracing and debugging purposes.
Most of them are listed in $ORACLE_HOME/rdbms/mesg/oraus.msg
One such event I use quite often to determine which locks/enqueues a session is requesting is the following.
For example, the trace below indicates that an innocuous-looking query on v$flash_recovery_area_usage takes a controlfile lock in mode 4, which might not be the best thing to happen in a high-throughput, multi-node RAC environment with a huge number of flashback logs.
SQL> alter session set events '10704 trace name context forever, level 10';

Session altered.

SQL> oradebug setmypid
Statement processed.

SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/TEST/TEST1/trace/TEST1_ora_600.trc
SQL> select * from v$flash_recovery_area_usage;


*** 2010-07-30 10:07:33.978
ksqgtl *** CF-00000000-00000000 mode=4 flags=0x1a011 timeout=900 ***
ksqgtl: no transaction
ksqgtl: use existing ksusetxn DID
ksqgtl:
ksqlkdid: 0001-0036-00000169

*** 2010-07-30 10:07:33.978
*** ksudidTrace: ksqgtl
ksusesdi: 0001-0036-00000168
ksusetxn: 0001-0036-00000169

*** 2010-07-30 10:07:33.978
ksqcmi: CF,0,0 mode=4 timeout=900
ksqcmi: returns 0
ksqgtl: RETURNS 0

*** 2010-07-30 10:07:33.978
ksqgtl *** CF-00000000-00000004 mode=4 flags=0x10010 timeout=0 ***
ksqgtl: no transaction
ksqgtl: use existing ksusetxn DID
ksqgtl:
ksqlkdid: 0001-0036-00000169



Another event I have used in the past relates to parallel query, to determine why PQ slaves do not get spawned.
But to my surprise this event no longer works in 11.2.

SQL> alter session set events '10392 trace name context forever, level 1';
Session altered.

SQL> oradebug setmypid
Statement processed.

SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/TEST/TEST1/trace/TEST1_ora_14748.trc

SQL> select /*+ parallel(a,8) */ count(*) from sys.obj$ a;

COUNT(*)
----------
231692

SQL> !cat /u01/app/oracle/diag/rdbms/TEST/TEST1/trace/TEST1_ora_14748.trc

*** 2010-07-30 14:41:02.547
*** SESSION ID:(316.58074) 2010-07-30 14:41:02.547
*** CLIENT ID:() 2010-07-30 14:41:02.547
*** SERVICE NAME:(SYS$USERS) 2010-07-30 14:41:02.547
*** MODULE NAME:(sqlplus@bart.au (TNS V1-V3)) 2010-07-30 14:41:02.547
*** ACTION NAME:() 2010-07-30 14:41:02.547

Processing Oradebug command 'setmypid'

*** 2010-07-30 14:41:02.547
Oradebug command 'setmypid' console output:

*** 2010-07-30 14:41:08.598
Processing Oradebug command 'tracefile_name'

*** 2010-07-30 14:41:08.598
Oradebug command 'tracefile_name' console output:

The trace does not contain any information.

Feedback from Oracle was that “not many people use the px numeric ones and so they removed the code.”…
You can still use the _px_trace underscore parameter to determine why queries are not running in parallel.

Oracle scene editor

Neil Jarvis - Mon, 2010-07-26 05:46
Just to show that I do keep this blog up to date, I thought I'd announce that voting for the next Oracle Scene editor will shortly be opening for all UKOUG members. I am standing for this position, having been the technical editor for the last three years.

CP7 for 10.1.2.3 released

Michael Armstrong-Smith - Mon, 2010-07-26 00:34
Just wanted to let you know that on June 4, 2010, Oracle released CP7 for 10.1.2.3. You will find it on MetaLink as patch number 9112482. Compared to CP6, 9 bugs have been fixed.

So far this cumulative patch has been released for the following platforms:
  • HP-UX PA-RISC (64-bit)
  • Microsoft Windows 32-bit
  • Linux x86 (works for both 32 bit and 64 bit)
  • Oracle Solaris on SPARC (64-bit)
If you are upgrading to CP6 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply CP5.

This patch needs to be applied to all Oracle Homes, i.e. Infrastructure home as well as all related midtier homes.

Bug 4398431 - HANG WHEN RETRIEVING A CONNECTION FROM THE IMPLICIT CONNECTION CACHE

The following posting has been updated:

CP6 for 10.1.2.3 released

Michael Armstrong-Smith - Mon, 2010-07-26 00:31
Just wanted to let you know that on November 18, 2009, Oracle released CP6 for 10.1.2.3. You will find it on MetaLink as patch number 8746296. Compared to CP5, 19 enhancements or bugs have been fixed.

So far this cumulative patch has been released for the following platforms:
  • HP-UX Itanium
  • HP-UX PA-RISC (64-bit)
  • IBM AIX on POWER Systems (64-bit)
  • Microsoft Windows 32-bit
  • Linux x86 (works for both 32 bit and 64 bit)
  • Sun Solaris SPARC (32-bit)
If you are upgrading to CP6 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply CP5.

This patch needs to be applied to all Oracle Homes, i.e. Infrastructure home as well as all related midtier homes.

Bug 4398431 - HANG WHEN RETRIEVING A CONNECTION FROM THE IMPLICIT CONNECTION CACHE

The following posting has been updated:
