Feed aggregator

Warning concerning Java 7 and E-Business Suite

Michael Armstrong-Smith - Thu, 2012-06-14 13:10

This notification is being posted at the request of Oracle Corporation

All E-Business Suite desktop administrators must disable the JRE Auto-Update for their end users immediately to stop it updating to Java 1.7.

URGENT BULLETIN:  Disable JRE Auto-Update for All E-Business Suite End-Users.
The following link from my fellow blogger Steve Chan explains more:

So why is this required?
If you have Auto-Update enabled, your JRE 1.6 installation will be updated to JRE 7.  This may happen as early as July 3, 2012, and will definitely happen after September 7, 2012, following the release of 1.6.0_35 (6u35).

Oracle Forms is not compatible with JRE 7 yet.  JRE 7 has not been certified with Oracle E-Business Suite yet.  Oracle E-Business Suite functionality based on Forms -- e.g. Financials -- will stop working if you upgrade to JRE 7.

There is also a known issue with WebLogic 10.3.6 and JDK 1.7, so you must use JDK 1.6 there as well. I will be posting on this issue shortly.

It seems to me, therefore, that until further notice you must stay on Java 1.6: definitely for E-Business Suite, and in my view for Discoverer as well.
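
For desktop administrators, one common way to lock this down centrally is a system-level deployment.config pointing at a deployment.properties that disables auto-download. This is a hedged sketch only; check the bulletin for Oracle's exact recommendation, and note that paths vary by Windows version:

deployment.config (e.g. C:\Windows\Sun\Java\Deployment\deployment.config):

# point every user's JRE at a shared, mandatory system configuration
deployment.system.config=file\:C\:/Windows/Sun/Java/Deployment/deployment.properties
deployment.system.config.mandatory=true

deployment.properties referenced above:

# stop the JRE from auto-downloading updates (i.e. Java 7) and lock the setting
deployment.javaws.autodownload=NEVER
deployment.javaws.autodownload.locked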

Been a while

Michael Armstrong-Smith - Thu, 2012-06-14 01:29
Hello everyone, I know it has been a little while since my last posting but I am here, alive and kicking and gearing up to get back into blogging.
In case you are not aware, I am in the process of updating my Discoverer Handbook to the latest 11g version of the product. Watch for more postings very soon.

Roku SDK

Bradley Brown - Thu, 2012-06-14 00:04
It's amazing (and pretty cool) that when you start looking around, most everyone has an SDK that allows you to integrate whatever apps you want to develop into their solution.  Sure, you can develop apps for phones - Android, Apple (iOS), and even BlackBerry.  Everyone's effectively carrying pocket PCs around with them these days.

Let's say you want to develop an app to run on a connected TV.  Well, you can do that!  Yahoo (and Google) basically own that market - Google running on LG, Sony, etc., and Yahoo on many devices too, such as Vizio and Samsung.  Google's pretty easy to develop an app for since it's powered by Android.  I already had my Android app developed, so all I had to do was add some information to my config file, choose a specific API version and I was done.  That was easy!  So now I have an app that runs on Android devices such as the MANY phones and tablets out there.  Yahoo is a bit more complicated.  You have to download a virtual machine (and a virtual machine "runner" such as VirtualBox from Oracle) and some other code.  It's not well documented either.  I have a Yahoo TV, so that's my next project.

I also have a Roku box.  Roku provides an SDK (Software Development Kit) AND amazing documentation to help you write your first Roku application.  The first language that I programmed in was BASIC, and Roku's language just so happens to be VERY similar to Visual BASIC, so for me the learning curve was pretty easy.

Here's a look at the functionality I created using the Roku SDK.  First off, I published my application as a Roku private channel (public channels are apps reviewed by Roku - private ones are not).  If you go to https://owner.roku.com/Add/InteliVideo you'll see how you can add my new channel to your Roku box.  There are 3 images below.  The image on the left shows the link on my website to the above Roku link.  On my website, I determine what device you're viewing the site from and display the player for your device (as discussed below).  The image in the middle shows the result of the above link to Roku, and the image on the right is what you'll see if you add my Roku app to your Roku device.


Here's a look at that iFrame - if you're looking at this blog from your iPad or iPhone, you'll see the iOS link (which isn't available in the marketplace yet, but will be soon).  If you're looking at it from an Android device, you'll see the link for the Android app.  If you're looking at it from a PC or desktop, you'll see the Roku link.
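
Device detection like this typically just inspects the browser's User-Agent header. Here is a minimal sketch of the idea in Python - hypothetical names and paths, not InteliVideo's actual code, which isn't shown in the post:

import re

def player_link_for(user_agent):
    # route iOS devices to the iOS app, Android to the Android app,
    # and everyone else (PC/desktop) to the Roku private channel
    ua = (user_agent or "").lower()
    if re.search("ipad|iphone|ipod", ua):
        return "/players/ios"
    elif "android" in ua:
        return "/players/android"
    else:
        return "https://owner.roku.com/Add/InteliVideo"

print player_link_for("Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X)")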

Once you install the Roku app, it will show up in your list of Roku apps!  This works very much like auto-updates on iOS and Android.  You can arrow over to the app, click OK / Select, and it will run the InteliVideo app.  Note that you can control everything about the app - from the description showing below the icon to the icon used.



Roku has the ability to generate a unique ID, so I display this unique ID to the user and tell them to go to the InteliVideo website to link their account to the Roku device.  This way I didn't have to build an authentication page for the Roku, where "typing" with your arrow keys is frankly painful.
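
This device-linking flow is a common pattern: the box displays a short code, the user enters it on the website while logged in, and the server ties the device to the account. A rough sketch of the server side in Python (hypothetical names - the post doesn't show the real implementation):

import random
import string

pending = {}   # link code -> Roku device id, waiting to be claimed
linked = {}    # Roku device id -> InteliVideo account id

def start_link(device_id):
    # called by the Roku app when it starts up unlinked;
    # the returned code is what the TV screen displays
    code = "".join([random.choice(string.uppercase + string.digits) for _ in range(6)])
    pending[code] = device_id
    return code

def claim_link(code, account_id):
    # called by the website when the logged-in user types in the code
    device_id = pending.pop(code)
    linked[device_id] = account_id

code = start_link("roku-12AB34CD")
claim_link(code, "account-42")
print linked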


Once you've linked your Roku device to your InteliVideo account, my application looks up all of the categories that you've purchased or can view and it displays each of these videos in a grid.  You can move around the page to view information about each video.  When you want to select a video, you click OK and you'll see the next page.  In the application, I thought it was pretty cool that I can dynamically look up these videos (via XML), get their images (via HTTP) and display them along with descriptions, ratings, etc.
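
That dynamic lookup is just HTTP plus XML parsing on the device (Roku's BrightScript has roUrlTransfer and roXMLElement for this). The same idea sketched in Python, against a hypothetical feed whose endpoint and schema are made up for illustration:

import urllib2
import xml.etree.ElementTree as ET

# fetch the catalog XML, then pull each video's metadata and grid image
feed = urllib2.urlopen("https://example.com/api/videos.xml").read()
for video in ET.fromstring(feed).findall("video"):
    title = video.findtext("title")
    poster = urllib2.urlopen(video.findtext("poster")).read()  # image bytes via HTTP
    print title, video.findtext("description"), video.findtext("rating")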


Once you select a video, a custom viewer starts it.  I display the name of the video on the left side of the page with the instructions on the right, on top of the InteliVideo logo.  If you press the down button, I display the video in full screen.  These videos are streamed, and all of the buffering is done for the user automatically.  The quality, even on my big screen TV, is Blu-ray level.


As you can see, Roku has a full SDK that allows me to authenticate users, provide only videos they have paid for, allow them to watch them, rewind, pause, etc.  It's amazing how powerful these SDKs are.  I'll talk about the Yahoo TV SDK once I've written my application for my Vizio TV with Yahoo TV in it.

Eventual Consistency Explained

Charles Lamb - Tue, 2012-06-12 14:21

Here's a short paper called De-mystifying "Eventual-Consistency" In Distributed Systems by Ashok Joshi.

Recently, there’s been a lot of talk about the notion of eventual consistency, mostly in the context of NoSQL databases and “Big Data”. This short article explains the notion of consistency, and also how it is relevant for building NoSQL applications.

First Steps in Exploring Social Media Analytics

Donal Daly - Sat, 2012-06-09 15:17
As I talk with customers and colleagues, the topic of social media analytics comes up often.  Some customers already have a strategy defined and are executing to a plan, while others are at a more nascent stage but believe in the potential to have a direct and almost immediate connection to their customers.

I'll admit that I am somewhat of a social media novice, so it will be a learning experience for me too. I am intrigued by the depth of analytics that may be possible. Only this month did I set up my profile on Facebook - and I'm 47! I have been using Twitter more regularly of late, since our Teradata Universe conference, and I probably look at it 3 or 4 times a day, depending on what's on my schedule. I am finding some interesting, funny and informative updates each day as I slowly expand the number of people I follow. It is really a mixture of friends and work-related contacts at the moment.

I have been a member of LinkedIn for a number of years and find it a useful resource from a professional perspective. I am within the first 1% of members who subscribed to the site (I received an email telling me I was within the first 1 million sign-ups when the site hit 100 million). Finally, I am keen to blog more frequently when I have something interesting to share (i.e. this! :-) ); I had stopped blogging for about 5 years at one point. I have also started with Flickr and YouTube. I'll be my own guinea pig in some ways as I explore and experiment with possible useful analytics in these social media channels.

However, when most people think of social media and the associated analytics, Facebook and Twitter are often mentioned first.
Focusing on Facebook and Twitter, you see two very different levels of information.  Twitter provides only basic, row-level data.  Facebook provides much more complex, relational data.  We'll explore these in more detail in future posts.

Data from social media must be linked in three ways:
  • Within the social media itself
  • Across multiple social media
  • Between social media and the bank's core data

The most reliable forms of linking use unique references: email addresses, IP addresses and telephone numbers.  This can be supplemented by direct access methods (i.e. asking the user for their Twitter name, or persuading them to Like the bank on Facebook from within a known environment).

However, even then the confidence in the link must be evaluated and recorded: this information is user-provided and may be wrong in some cases. The notion of a "soft match" should be adopted - we think that this is the same person, but we cannot be sure.
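
As a toy illustration of recording a soft match with a confidence score (hypothetical fields; real record linkage would be considerably more careful):

import difflib

def match_confidence(profile, customer):
    # an exact unique reference (here, email) earns high confidence
    if profile.get("email") and profile["email"].lower() == customer["email"].lower():
        return 0.95
    # otherwise fall back to fuzzy name similarity, recorded as lower
    # confidence: we think this is the same person, but we cannot be sure
    score = difflib.SequenceMatcher(None, profile["name"].lower(),
                                    customer["name"].lower()).ratio()
    return round(0.5 * score, 2)

print match_confidence({"email": "jo@example.com", "name": "Jo Smith"},
                       {"email": "jo@example.com", "name": "Joanne Smith"})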

I would like to end this post with a recommendation to read the following white paper by John Lovett from Web Analytics Demystified: Beyond Surface-Level Social Media. Lovett, who has written a book on social analytics, lays out a compelling vision of deeper social analytics for companies.  He clearly presents the value for companies in going beyond surface-level analytics of likes, followers and friends, and challenges you to ask deeper and more important questions. This white paper has been sponsored by Teradata Aster and is available for free from here.

In reading this white paper you will gain an understanding of the term 'Surface-Level Social Media' coined by John, and how it is possible to gain competitive advantage even when operating at this level. He outlines how Generation-Next Marketing is being powered by social analytics, backed up with a number of interesting customer examples. He goes on to lay out a 7-point strategy for building your deeper social media strategy. Finally, John concludes with how unstructured data can yield valuable customer intelligence.

I found it very informative and well written, and it gave me a number of new insights and points to ponder. I would be interested in your thoughts on it too.

Enjoy!

ODI 11g – Faster Files

Antonio Romero - Thu, 2012-06-07 13:28

Deep in the trenches of ODI development, I raised my head above the parapet to read a few odds and ends and found myself thinking: why don't they know this? Take this article here – in the past, customers (see the forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple – just a new KM that:

  1. improves the out of the box experience – just build the mapping and the appropriate KM is used
  2. improves out of the box performance for file to file data movement.

This improvement to the out of the box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems.
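
The KM's generated Java isn't reproduced here, but the underlying idea (a buffered stream copy per source file, with a worker thread for each file matched by the wildcard) can be sketched in a few lines of Python. Paths are hypothetical, and the real KM also applies the mapping's transformations and filters per row:

import glob
import shutil
import threading

def transform_file(src, dst):
    # plain stream copy through a 1 MB buffer; the real KM would also
    # apply per-row transformations and filters at this point
    fin = open(src, "rb")
    fout = open(dst, "wb")
    shutil.copyfileobj(fin, fout, 1024 * 1024)
    fin.close()
    fout.close()

threads = []
for src in glob.glob("/data/in/orders_*.dat"):   # wildcard resource name
    t = threading.Thread(target=transform_file,
                         args=(src, src.replace("/in/", "/out/")))
    threads.append(t)
    t.start()
for t in threads:
    t.join()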

So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors.

For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds; with a single thread it took 220 seconds. This was by no means a supercomputer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration. Plus, and it's a big plus, it was much faster than before.

So if you are doing any file to file transformations, check it out!

An Organizational Constraint that Diminishes Software Quality

Cary Millsap - Thu, 2012-06-07 10:19
One of the biggest problems in software performance today occurs when the people who write software are different from the people who are required to solve the performance problems that their software causes. It works like this:
  1. Architects design a system and pass the specification off to the developers.
  2. The developers implement the specs the architects gave them, while the architects move on to design another system.
  3. When the developers are “done” with their phase, they pass the code off to the production operations team. The operators run the system the developers gave them, while the developers move on to write another system.
The process is an assembly line for software: architects specialize in architecture, developers specialize in development, and operators specialize in operating. It sounds like the principle of industrial efficiency taken to its logical conclusion in the software world.


In this waterfall project plan,
architects design systems they never see written,
and developers write systems they never see run.
Sound good? It sounds like how Henry Ford made a lot of money building cars... Isn’t that how they build roads and bridges? So why not?

With software, there’s a horrible problem with this approach. If you’ve ever had to manage a system that was built like this, you know exactly what it is.

The problem is the absence of a feedback loop between actually using the software and building it. It’s a feedback loop that people who design and build software need for their own professional development. Developers who never see their software run don’t learn enough about how to make their software run better. Likewise, architects who never see their systems run have the same problem, only it’s worse, because (1) their involvement is even more abstract, and (2) their feedback loops are even longer.

Who are the performance experts in most Oracle shops these days? Unfortunately, it’s most often the database administrators, not the database developers. It’s the people who operate a system who learn the most about the system’s design and implementation mistakes. That’s unfortunate, because the people who design and write a system have so much more influence over how a system performs than do the people who just operate it.

If you’re an architect or a developer who has never had to support your own software in production, then you’re probably making some of the same mistakes now that you were making five years ago, without even realizing they’re mistakes. On the other hand, if you’re a developer who has to maintain your own software while it’s being operated in production, you’re probably thinking about new ways to make your next software system easier to support.

So, why is software any different than automotive assembly, or roads and bridges? It’s because software design is a process of invention. Almost every time. When is the last time you ever built exactly the same software you built before? No matter how many libraries you’re able to reuse from previous projects, every system you design is different from any system you’ve ever built before. You don’t just stamp out the same stuff you did before.

Software is funny that way, because the cost of copying and distributing it is vanishingly small. When you make great software that everyone in the world needs, you write it once and ship it at practically zero cost to everyone who needs it. Cars and bridges don’t work that way. Mass production and distribution of cars and bridges requires significantly more resources. The thousands of people involved in copying and distributing cars and bridges don’t have to know how to invent or refine cars or bridges to do great work. But with software, since copying and distributing it is so cheap, almost all that’s left is the invention process. And that requires feedback, just like inventing cars and bridges did.

Don’t organize your software project teams so that they’re denied access to this vital feedback loop.

WLST Script changing logfile location

Marc Kelderman - Thu, 2012-05-31 13:06
While I was migrating Forms 6i to Forms 11g patch set 5, I found the configuration tool of Forms 11g a bit strict: in the silent install it is not possible to set the locations of the log files. Here is a script that sets new log file locations for all the Managed Servers and the Admin Server in the domain. It can also set the file location of all the ODL logging.

fmwlogging.py:
#
# usage:
#
# ${ORACLE_HOME}/common/bin/wlst.sh fmwlogging.py <domain-name> <admin-server-url> <password>
#

import os
import sys
import traceback
import getopt

loggingEnabled = True
# rotationType: "none", "bySize" or "byTime"
# logFileSeverity: "Trace", "Debug", "Info", "Notice" or "Warning"
# rotateLogOnStartup: False or True
rotationType = "none"
logFileSeverity = "Warning"
rotateLogOnStartup = True

def editMode():
    edit()
    startEdit()

def editActivate():
    save()
    activate(block="true")

def updateLog(domain_name, logMB, logType):
    print "**** Start updateLog()"

    fileName = ""
    if logType == "Access":
        logMB.setLoggingEnabled(loggingEnabled)
        fileName = "/data/logs/" + domain_name + "/" + logMB.getName() + "_access.log"
    elif logType == "Server":
        fileName = "/data/logs/" + domain_name + "/" + logMB.getName() + ".log"
    elif logType == "Datasource":
        fileName = "/data/logs/" + domain_name + "/" + logMB.getName() + "_datasource.log"
    elif logType == "Domain":
        logMB.setLogFileSeverity(logFileSeverity)
        fileName = "/data/logs/" + domain_name + "/" + domain_name + ".log"

    print "**** " + logType + " " + fileName
    logMB.setFileName(fileName)
    logMB.setRotationType(rotationType)
    logMB.setRotateLogOnStartup(rotateLogOnStartup)

    print "**** Finished updateLog()"

def changeLogPath(domain_name):
    print "**** Start changeLogPath()"

    domainConfig()
    editMode()

    # the domain-wide log
    logMB = getMBean("/Log/" + domain_name)
    updateLog(domain_name, logMB, logType="Domain")

    editActivate()

    servers = cmo.getServers()

    editMode()
    for server in servers:
        serverName = server.getName()

        # server log
        logMB = getMBean("/Servers/" + serverName + "/Log/" + serverName)
        updateLog(domain_name, logMB, "Server")

        # HTTP access log
        httpLogMB = getMBean("/Servers/" + serverName + "/WebServer/" + serverName + "/WebServerLog/" + serverName)
        updateLog(domain_name, httpLogMB, "Access")

        # datasource log
        DSLogMB = getMBean("/Servers/" + serverName + "/DataSource/" + serverName + "/DataSourceLogFile/" + serverName)
        updateLog(domain_name, DSLogMB, "Datasource")

    editActivate()
    print "**** Finished changeLogPath()"

def usage():
    print "Usage"
    print "./fmwlogging.py <domain-name> <admin-server-url> <password>"

def parse_input():
    print "***** Start parse_input()"

    domain_name = sys.argv[1]
    admin_server = sys.argv[2]
    admin_password = sys.argv[3]

    print "***** Finished parse_input()"
    return domain_name, admin_server, admin_password

# Connection settings
def connectToServer(username, password, adminurl):
    print "***** Start connectToServer()"

    connect(username, password, adminurl)

    print "***** Finished connectToServer()"

# Definition to disconnect from a server
def disconnectFromServer():
    print "***** Start disconnectFromServer()"

    disconnect()

    print "***** Finished disconnectFromServer()"
    exit()  # note: exit() also leaves the WLST shell

def changeODLPath(domain_name):
    print "***** Start changeODLPath()"

    domainConfig()
    managedServers = cmo.getServers()

    # walk every server (Admin and managed) and repoint its ODL log handlers
    for managedServer in managedServers:
        sname = managedServer.getName()
        path = "/Servers/" + sname
        cd(path)

        print "***** Changing server: " + sname
        lh = listLogHandlers(target=sname)
        for l in lh:
            lname = l.get("name")
            lprops = l.get("properties")
            removeprops = []
            for prop in lprops:
                if prop.get("name") == "maxFileSize":
                    removeprops.append("maxFileSize")
                elif prop.get("name") == "maxLogSize":
                    removeprops.append("maxLogSize")

            odlfile = "/data/logs/" + domain_name + "/" + sname + "-" + lname + "-diagnostic.log"
            configureLogHandler(target=sname, name=lname, path=odlfile, removeProperty=removeprops)

    print "***** Finished changeODLPath()"

def main(domain_name, admin_server, admin_password):
    print "***** Start main()"

    connectToServer("weblogic", admin_password, admin_server)

    # change the ODL log files on all servers (Admin, managed)
    # changeODLPath(domain_name)

    # change the standard log files on all servers (Admin, managed)
    changeLogPath(domain_name)

    # Calling disconnectFromServer definition with no arguments
    disconnectFromServer()

    print "***** Finished main()"

try:
    print "** start()"

    domain_name, admin_server, admin_password = parse_input()
    main(domain_name, admin_server, admin_password)

    print "** finished()"

except Exception, (e):
    print "ERROR: An unexpected error occurred!"
    traceback.print_exc()
    dumpStack()
    print "ERROR: Failed to configure fmw diagnostic logging " + domain_name + "!!"

#EOF


Reference: Weblogic MBean Documentation

Cloud-Based Marketplaces and Services

Bradley Brown - Mon, 2012-05-28 23:41
The cloud is clearly where the world is moving!  Amazon has done an amazing job of offering up cloud-based infrastructure services (i.e. servers by the hour).

There are thousands of DVDs on the market today, and we all know that DVDs are going away.  Everyone is watching movies on their iPads and iPhones now - you see kids watching movies on iPhones at restaurants.  Roku is your future cable killer.  It allows you to watch online content on your TV, and it's similar to Apple TV in many regards.

My new company, InteliVideo, has built a cloud-based platform that helps companies with libraries of DVDs move into the world of streaming and downloadable videos for any device.  It's a marketplace.  DVD content owners can do everything themselves - my site is entirely self-service.  We collect the money, keep track of who bought what and how long they can watch it, and ultimately deliver the content using our cloud-based platform.

My goal is to make these videos (formerly DVDs) available digitally on every possible platform that my customers' customers might want to watch them on.

This was easy enough to accomplish with iOS, Android, and the Amazon markets.  You can give an application away in any of these markets.  So we developed apps and put them into the markets for free.  iOS is still receiving the finishing touches, but will be available soon.

The Roku marketplace allows you to develop an app and create a private channel too.  Roku's development kit uses a language that's pretty similar to Visual Basic from what I can tell.  You can develop an app and put it into their market for free too.  In the InteliVideo case, the authentication will allow us to restrict which videos you have access to through Roku (and iOS, Android, etc).  JSON and XML services are available to these applications.

Another cloud-based service that we're using is PayPal.  It's great to be able to charge what you want and integrate PayPal into your solution, and this is exactly what we did.

Combining cloud solutions and integrating them together allows for the creation of a new cloud-based service in no time!

Strange ORA-14404, or not?

Yasin Baskan - Thu, 2012-05-24 08:01
I was trying to drop a tablespace which I knew had no segments in it. A simple query on dba_segments returned no rows, which means there are no segments allocated in this tablespace. But strangely I got this:


SQL> drop tablespace psapsr3old including contents and datafiles;
drop tablespace psapsr3old including contents and datafiles
*
ERROR at line 1:
ORA-14404: partitioned table contains partitions in a different tablespace

How come I cannot drop a tablespace with no segments in it?

Enter deferred segment creation. Things were simpler in 9i and 10g. The database I get this error on is an 11.2.0.2 database, and starting with 11.2.0.1 things may fool you when it comes to segments in the database. 11.2 brought us a feature called "deferred segment creation", which means no segments are created if you do not insert any data into the tables you create. Have a look at the note "11.2 Database New Feature Deferred Segment Creation [Video] (Doc ID 887962.1)" about this feature. It is there to save disk space in case you have lots of tables without data in them. In 11.2.0.1 it applied only to non-partitioned heap tables; starting with 11.2.0.2 it is also used for partitioned tables.


Coming back to my problem: even though no segments are reported in dba_segments, there are tables and partitions created in this tablespace whose segments have not been created yet. If we look at the tables and partitions in that tablespace:

SQL> select segment_created,count(*) from dba_tables
  2  where tablespace_name='PSAPSR3OLD'
  3  group by segment_created;


SEG   COUNT(*)
--- ----------
NO       13482


SQL> select segment_created,count(*) from dba_tab_partitions
  2  where tablespace_name='PSAPSR3OLD'
  3  group by segment_created;


SEGM   COUNT(*)
---- ----------
NO         1237

There are thousands of objects in there.

What is the solution then?

Obviously it is to get rid of these objects by moving them to a different tablespace. The standard "alter table move" and "alter table move partition" commands do the job. Then the question becomes: will a move operation create the segment in the new tablespace? If you are on 11.2.0.1, yes it will, defeating the whole purpose of this feature. If you are on 11.2.0.2 it will not create the segments. This is explained in the note "Bug 8911160 - ALTER TABLE MOVE creates segment with segment creation deferred table (Doc ID 8911160.8)".
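
With thousands of objects involved, the practical route is to generate the move statements from the dictionary. A rough sketch using Python and cx_Oracle follows - credentials and the target tablespace are hypothetical, and you should review the generated DDL before executing it (remember that moving a segment also invalidates its indexes):

import cx_Oracle

conn = cx_Oracle.connect("system", "password", "dbhost/orcl")  # hypothetical
cur = conn.cursor()

# non-partitioned tables need MOVE, partitions need MOVE PARTITION
# (for partitioned tables, dba_tables.tablespace_name is null, so the
# first branch naturally picks up only the non-partitioned ones)
cur.execute("""
    select 'alter table "' || owner || '"."' || table_name ||
           '" move tablespace NEWTBS'
    from   dba_tables where tablespace_name = 'PSAPSR3OLD'
    union all
    select 'alter table "' || table_owner || '"."' || table_name ||
           '" move partition ' || partition_name || ' tablespace NEWTBS'
    from   dba_tab_partitions where tablespace_name = 'PSAPSR3OLD'""")

for (ddl,) in cur:
    print ddl   # review first; then run each with cur.execute(ddl) if happy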


After everything is moved you can safely drop the tablespace without this error.


UPDATE: Gokhan Atil made me aware of Randolf Geist's post about the same issue. See that post here.

Upcoming WebCast: Bridging the Gap--SQL and MapReduce for Big Analytics

Donal Daly - Wed, 2012-05-23 09:49


On Tuesday, May 29th, Teradata Aster will be hosting a webcast, Bridging the Gap: SQL and MapReduce for Big Analytics. The expected duration is 60 minutes, starting at 15:00 CET (Paris, Frankfurt) / 14:00 (London). You can register for free here.

We ran this seminar earlier in May, but at a time more convenient for a US audience. The seminar was well attended, and we received good feedback from attendees, which encouraged us to rerun it with some minor changes at a time more convenient for people in Europe.

If you are considering a big data strategy, are confused by all the hype out there, or believe that MapReduce = Hadoop or that Hive = SQL, then this is an ideal event for a business user: a summary of the key challenges, the sorts of solutions that are out there, and the novel and innovative approach that Teradata Aster has taken to maximise time to value for companies considering their first Big Data initiatives.

I will be the moderator for the event and will introduce Rick F. van der Lans, Managing Director of R20/Consultancy, based in the Netherlands. Rick is an independent analyst, consultant, author and lecturer specializing in Data Warehousing, Business Intelligence, Service Oriented Architectures, and Database Technology. He will be followed by Christopher Hillman from Teradata. Chris is based in the United Kingdom and recently joined us as a Principal Data Scientist. We will have time at the end to address questions from attendees.

During the session we will discuss the following topics:

  • Understanding MapReduce vs SQL, UDF's, and other analytic techniques
  • How SQL developers and business analysts can become "data scientists"
  • Fitting MapReduce into your BI/DW technology stack
  • Making the power of MapReduce available to the larger business community


So come join us on May 29th. It will be an hour of your time well invested. Register for free here.



The Girl With The ANSI Tattoo

Oracle WTF - Wed, 2012-05-23 01:35

I enjoyed the David Fincher remake of The Girl With The Dragon Tattoo more than I thought I would. Rather than a shallow and cynical Hollywood cash-in, it's actually a tense, atmospheric, only slightly voyeuristic crime thriller. My favourite part, though, was when Lisbeth Salander begins to solve a 40-year-old murder cold case using SQL.

[girl_tattoo_overshoulder.jpg]

We see her tapping at her laptop as she hacks effortlessly into the Swedish police database, interspersed with green-tinted tracking shots of scrolling text as she types in keywords like 'unsolved' and 'decapitation', though never quite the whole query:

[girl_tattoo1.jpg] [girl_tattoo2.jpg]
[girl_tattoo3-mari-magda.jpg]

Naturally I couldn't help stitching a few screenshots together in Photoshop, and this is what I got:

Immediately moviegoers will notice that this can't be Oracle SQL - obviously the AS keyword is not valid for table aliases in Oracle. In fact, as we pull back for a thrilling query-results listing, we see the mysql prompt, the giveaway "use [dbname]" connect syntax, and the over-elaborate box drawing.

[girl_tattoo_results1.jpg]

Notice we can just make out the 'FT' of an ANSI left join to the Keyword table.

Finally we get a full-screen shot of the results listing for Västra Götaland:

[girl_tattoo_results2.jpg]

Here's what we were able to reconstruct in the Oracle WTF Forensics department:

SELECT DISTINCT v.fname, v.lname, i.year, i.location, i.report_file
FROM   Incident AS i
       LEFT JOIN Victim AS v on v.incident_id = i.id
       LEFT JOIN Keyword AS k ON k.incident_id = i.id
WHERE  i.year BETWEEN 1947 AND 1966
AND    i.type = 'HOMICIDE'
AND    v.sex = 'F'
AND    i.status = 'UNSOLVED'
AND    (  k.keyword IN
          ('rape', 'decapitation', 'dismemberment', 'fire', 'altar', 'priest', 'prostitute')
        OR v.fname IN ('Mari', 'Magda')
        OR SUBSTR(v.fname, 1, 1) = 'R' AND SUBSTR(v.lname, 1, 1) = 'L' );

+--------+---------+------+-----------+----------------------------------+
| fname  | lname   | year | location  | report_file                      |
+--------+---------+------+-----------+----------------------------------+
| Anna   | Wedin   | 1956 | Mark      | FULL POLICE REPORT NOT DIGITIZED |
| Linda  | Janson  | 1955 | Mariestad | FULL POLICE REPORT NOT DIGITIZED |
| Simone | Grau    | 1958 | Goteborg  | FULL POLICE REPORT NOT DIGITIZED |
| Lea    | Persson | 1962 | Uddevalla | FULL POLICE REPORT NOT DIGITIZED |
| Kajsa  | Severin | 1962 | Dals-Ed   | FULL POLICE REPORT NOT DIGITIZED |
+--------+---------+------+-----------+----------------------------------+

Shocked moviegoers will have been left wondering why a genius-level hacker would outer-join to the Victim and Keyword tables only to use literal-text filter predicates that defeat the outer joins (a WHERE predicate such as v.sex = 'F' can never be true for the NULL-padded rows, so the LEFT JOINs behave as inner joins), and whether MySQL has a LIKE operator.

Introduction to Oracle OLAP Web Presentation Series

Keith Laker - Tue, 2012-05-22 10:08
I've posted a series of three videos introducing Oracle OLAP.  This is a great series for people who are interested in learning about what Oracle OLAP is and what it's used for.  I suggest viewing them in order.  Here are the links:

Oracle OLAP Overview:  Part 1 - Architecture
Oracle OLAP Overview:  Part 2 - Key Features
Oracle OLAP Overview:  Part 3 - Use Cases
Categories: BI & Warehousing

How to Merge a Row

Oracle WTF - Sat, 2012-05-19 08:20

The tough challenge that seems to have been faced by this developer was that the ID, name and value passed into the procedure needed to be either applied as an update if the name existed, or else inserted as a new row. You might think you could just use MERGE, or maybe attempt the update, capturing the ID value with a RETURNING clause, then if that found no rows insert a new row using seq_somethings.NEXTVAL for the ID. But wait, that wouldn't be complicated enough, would it?

Here's the table:

create table something
( id               integer  not null constraint pk_something primary key
, name             varchar2(100)
, publicsomething  number   default 0  not null );

Here's what they came up with:

PROCEDURE SaveSomething(pId              IN OUT something.id%TYPE,
                        pName            IN something.name%TYPE,
                        pPublicSomething IN something.publicsomething%TYPE) IS
     counter NUMBER;
BEGIN
     SELECT COUNT(rowid)
     INTO   counter
     FROM   something c
     WHERE  LOWER(c.name) = LOWER(pName);

     IF counter > 0 THEN
          SELECT id
          INTO   pId
          FROM   something c
          WHERE  LOWER(c.name) = LOWER(pName);
     END IF;

     IF (pId IS NOT NULL AND pId > 0) THEN
          UPDATE something
          SET    id              = pId,
                 name            = pName,
                 publicsomething = pPublicsomething
          WHERE  id = pId;

     ELSE
          SELECT seq_somethings.NEXTVAL
          INTO   pId
          FROM   dual;

          INSERT INTO something
               (id, name, publicsomething)
          VALUES
               (pid, pname, ppublicsomething);
     END IF;

EXCEPTION
     WHEN OTHERS THEN
          -- log the details then throw the exception so the calling code can perform its own logging if required.
          log_error('PK_ADMIN.SaveSomething',
                    USER,
                    SQLCODE || ': ' || SQLERRM);
          RAISE;
END SaveSomething;

Thanks Boneist for this. By the way she mentioned she counted 6 WTFs, "some more subtle than others". I'm not sure whether we're counting the stupid redundant brackets around the IF condition (drives me crazy), the novel 5-character indent or the design WTF in which the "name" column is expected to be unique but has no constraint or indeed index. I'm definitely counting SQLCODE || ': ' || SQLERRM though.

Meet The Experts: Matthew Morris

OCP Advisor - Sat, 2012-05-19 07:19
OCP Advisor interviewed Matthew Morris, one of the earliest Oracle Certified Professionals and now a popular author of OCP study guides. Matthew was among the first hundred to be OCP DBA certified, on Oracle 7.3, and has since upgraded his certification to releases 8i, 9i, 10g and 11g.

OCP Advisor: Hello Matt, on behalf of OCP blog readers we are delighted to have you as our featured Oracle Expert. Please tell us something about your professional experience.

Matthew Morris: In early 1996, I started working for Oracle Support as part of the RDBMS Server Technologies team.  Over the first few months, I read every Oracle Press book I could get my hands on from cover to cover! I was the top-rated support analyst and was asked to develop and deliver courses for team members.  After four years of DBA support, I changed my focus to PL/SQL and web development, building database tools and applications to assist managers and analysts in the Oracle Support team.  Currently I work at Computer Sciences Corporation and am responsible for developing custom database applications using PL/SQL and Oracle APEX.

OCP Advisor: Please tell us about your Oracle Certification path and what motivated you on this path.

Matthew Morris: Two years after I started working for Oracle, the Oracle Certified Professional program was launched, and Sylvan created a testing center in our office building. I was interested in demonstrating my knowledge, so I took and passed all four of the OCP DBA tests. I learned that this made me one of the first hundred people to become an Oracle Certified Professional (OCP)! Later that year, Oracle introduced the Oracle Developer certification; I took the four developer tests in a single day and became one of the first hundred Oracle Certified Developers.  Since then, I have tried to keep my certifications current.

OCP Advisor: You are a popular author of several Oracle Certification guides. Please let us know how you started writing them.

Matthew Morris: In May 2011, I developed a study guide for 1Z0-050: New Features for Administrators.  Then in late 2011, I began studying for 1Z0-450: Oracle Application Express 4: Developing Web Applications.  For almost every exam, my preparation includes creating a study sheet with key testing points.  For 1Z0-450, this sheet rapidly became much larger and more elaborate than usual.  It seemed reasonable at the time to go the extra mile and make it into a publishable study guide.  Ultimately, I found that doing this did not add an extra mile but rather an extra ten to fifteen miles!  However, once it was published, the result was heartening.  I modified the 1Z0-050 course material I had developed earlier into a second study guide and published it as well.  Since those first two, I have published study guides for the SQL Expert and SQL Fundamentals exams. Currently, I am developing exam guides for the OCP DBA I and DBA II exams.

OCP Advisor: Please share with our blog readers about the content of the Oracle Certification Guides you have published?

Matthew Morris: The Oracle Certification Prep series targets two groups of candidates.  The first group consists of Oracle professionals who want to get certified in a subject they are experienced in and would like a reference guide to help them 'bridge the gaps'.  The second group consists of those who are already using another source of information in their study plan but would like to have a second reference.  The series is especially valuable for candidates who are studying only from the Oracle documentation and would like an inexpensive reference against which to check their preparation.  To serve both audiences, the study guides present information about the test topics in a very condensed format.  The intent is to deliver the facts candidates need to know, and the functions and features they need to recognize, in a format compact enough to review several times.

OCP Advisor: What advice do you have for candidates preparing for an Oracle Certification exam?

Matthew Morris: From the posts on the various certification forums I frequent, there are many candidates who are studying just to pass the exams rather than to gain knowledge.  Learning how to pass a test does not prepare someone to perform well at a professional level.  Use the opportunity when studying for an Oracle certification to make a concerted effort to learn the concepts first.

OCP Advisor: Please share with our blog readers one habit that has contributed most to your professional success?

Matthew Morris: Setting realistic expectations determines whether you succeed or fail.  I try to set a reasonable time frame and give my very best to deliver ahead of every deadline.

Much Ado About Nothing?

Rob van Wijk - Sat, 2012-05-19 04:58
I was reading this presentation PDF by Hugh Darwen recently, called How To Handle Missing Information Without Using NULL. Several great thinkers and founders of the relational theory consider NULL the thing that should not be. For example, one slide in the above-mentioned PDF is titled SQL's Nulls Are A Disaster, and I found a paper with the amusing title The Final Null In The Coffin.

Future of Oracle Forms conference, part 4

Gerd Volberg - Sat, 2012-05-19 01:50
Wilfred van der Deijl presented "Partial and/or Gradual migration by embedding existing Forms in new UI technology".

Because Oracle doesn't offer a migration solution from Oracle Forms to ADF, Wilfred showed a technique for integrating existing Forms applications into a new ADF application. He called it OraFormsFaces, a framework which allows him to create hybrid applications. The non-Forms part doesn't have to be ADF, so you can use other web technologies too.

This hybrid approach can be used to move gradually from Forms to another technology, without a Big Bang.


In the last presentation Steven Davelaar spoke about "JHeadstart: Real world experiences for migrating Forms to ADF"


JHeadstart is a JDeveloper extension for template-based development in ADF. Best practices can be used out of the box.

Steven's presentation was the last one in the General Session part.


After a short break the "Parallel Break-Out sessions" began. In these parallel sessions, three speakers each had their own room and 45 minutes to present more information about their own topic.


At 5pm the day ended with the big speaker panel.


Here we all had the chance to discuss Forms modernization and the presentations with the speakers.

The night sessions started at 7pm. Six parallel hands-on sessions could be attended. The speakers had VMware images for the participants, so that they could test in their own environments.

At 9:30 the conference ended.

All relevant links can be found here

At this point I have to say many thanks to AMIS, all the speakers and the host Lucas Jellema. They did a great job on the conference, with 89 participants.

Many thanks
Gerd

Private Cloud vs Public Cloud

Anshu Sharma - Fri, 2012-05-18 13:25

I was at the All About the Cloud summit in San Francisco last week, and one of the most popular debates was when ISVs should choose a Private Cloud vs a Public Cloud for hosting their SaaS application. These are the most common situations in which a Private Cloud might be most appropriate for the ISV:

- Significant existing Data Center infrastructure

- Data cannot go to an outside provider (Data Sovereignty issues)

- Security requirements cannot be met by a Public provider

- Latency requirements cannot be met by a Public provider

- Application architecture does not meet the requirements of Public PaaS providers

In any case, the requirements for both Public and Private Clouds are the same:

- Allow the ISV to meet performance/availability SLAs while keeping operations cost low

- Standards-based architecture, so that the application/customer can move from Public to Private Clouds and vice versa

- Reduced complexity, allowing the ISV to concentrate on innovation at the application layer and not worry about infrastructure changes

- Deliver on the key Cloud value propositions around elasticity, quick provisioning and self-service

Two Oracle partners who have gone the Private Cloud route for their SaaS applications were in the news recently.

- IQNavigator won the SIIA CODie Award at the event http://iqnavigator.com/blog/2012/05/iqnavigator-wins-siia-codie-award-for-best-supply-chain-management-solution/

- Emerson Avocent announced the GA of their Data Center Infrastructure Management Application http://www.avocent.com/About/Newsroom/Press_Releases/2012/Emerson_Network_Power_Releases_Trellis_Platform_to_Unify_IT_and_Facilities_Management_for_Improved_Data_Center_Performance_and_TCO.aspx?utm_campaign=AVO%20NA%202012%20Trellis%20R1%20Campaign&utm_medium=email&utm_source=Eloqua

Microgen DBClarity Developer: Introductions

Sue Harper - Fri, 2012-05-18 11:07
Got a minute or two to spare and still not quite sure where DBClarity Developer fits in? Watch the brief introduction video posted on our website: http://www.microgen.com/uk-en/products/microgen-dbclarity-developer.


Future of Oracle Forms conference, part 3

Gerd Volberg - Fri, 2012-05-18 06:00
Mia Urman started with the next presentation: "To infinity and beyond: Extend the life of your Oracle Forms application by running your existing Forms from next generation technologies/platforms without migration"

She demonstrated "next generation forms" with OraPlayer, a tool which republishes a form as a mobile version.

In the first step, OraPlayer records a Forms business scenario.


Then you publish it to the web. In the last step, you create a UI in the technology of your choice.


While you play back the scenario, you work and interact with a running Forms runtime in the background.


Madi Serban's presentation showed the co-existence of Forms, ADF and APEX applications. It started with the question "Should we stay or should we go?" and moved on to redesigning applications using the PITSS tool.


The next presentation was "Yo!Forms" from Oliver Tickell and Don Smith.


Yo!Forms is a tool that can run Forms FMX files on the web in pure HTML and JavaScript, without the Java plugin.

I took a photo of a live demonstration, where I could use the Safari browser on my iPhone to start a Forms runtime and work with the form as if it were running in a JVM.


The beta test of Yo!Forms is starting this year, as I heard.

to be continued
Gerd
