Feed aggregator

JavaScript stored procedures as Cloud data services.

Kuassi Mensah - Tue, 2015-09-08 17:37
Find out how to implement JavaScript Stored Procedures with Oracle Database 12c and how to invoke these through RESTful Web Services.

https://blogs.oracle.com/java/entry/nashorn_and_stored_procedures

Every Word Counts: Translating the Oracle Applications Cloud User Experience

Usable Apps - Mon, 2015-09-07 07:06
Loïc Le Guisquet. Image by Oracle PR.

"Successfully crossing new frontiers in commerce needs people who understand local preferences as well as global drivers. In addition, technology has also been a great enabler of globalization, so the right balance between people and tech is key to success."

- Loïc Le Guisquet, Oracle President for EMEA and APAC

Oracle's worldwide success is due to a winning combination of smart people with local insight and great globalized technology. The Oracle Applications Cloud user experience (UX)—that competitive must-have and differentiator—is also a story of global technology and empathy for people everywhere.

UX provides for the cultural dynamics of how people work, the languages they speak, and local conventions and standards on the job. So, how do we deliver global versions of SaaS? Oracle Applications UX Communications and Outreach's Karen Scipi (@karenscipi) explains:

How We Build for Global Users

Oracle Applications Cloud is currently translated into 23 natural languages, besides U.S. English, using a process that ensures translated versions meet the latest user expectations about language, be it terminology, style, or tone.

Oracle HCM Cloud R10 Optimized for Global Working on YouTube

Global Workforce Optimization with Oracle HCM Cloud Release 10: More than 220 countries or jurisdictions supported.

Oracle Applications Cloud is designed for global use and deployment, leveraging Oracle ADF’s built-in internationalization (i18n) and translatability support to make development and translation easy. For example:

  • Translatable text is stored separately (externalized) from the application code for each language version (called a National Language Support [NLS] version).
  • Externalized text is contained in industry-standard XML Localization Interchange File Format (XLIFF)-based resource bundles, enabling not only safe, fast translation but also easy maintenance on a per-language basis (a simple sketch of this idea follows this list).
  • Currency, date, time, characters, reading and writing directions, and other local standards and conventions are automatically built in for developers. Oracle ADF uses the industry-standard i18n support of Oracle Java and Unicode.
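To make the externalization idea concrete, here is a toy JavaScript sketch (illustrative only; ADF's real mechanism is XLIFF resource bundles, not JavaScript objects): UI strings live in per-locale bundles, so translators work on the bundles while the code just looks labels up by key.

// Illustrative only: externalized UI strings, one bundle per locale.
var bundles = {
  'en-US': { lastName: 'Last Name' },
  'en-GB': { lastName: 'Surname' },
  'fr-FR': { lastName: 'Nom' }
};

function label(locale, key) {
  var bundle = bundles[locale] || bundles['en-US']; // fall back to the source language
  return bundle[key];
}

console.log(label('fr-FR', 'lastName')); // Nom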

In addition:

  • Users can enter and display data in their language of choice, independent of the language of the user interface, relying on what we call the multilingual support (MLS) architecture.
  • The software includes global and country-specific localizations that provide functionality for country- and region-specific statutory regulatory requirements, compliance reporting, local data protection rules, business conventions, organizational structure, payroll, and other real-world necessities for doing business with enterprise software.
  • Users can switch the language of their application session through personalization options.
  • NLS versions can be customized and extended in different languages by using Oracle composer tools to align with their business identity and processes. Translated versions, too, rely on the same architecture as the U.S. version for safe customizations and updates.

How We Translate

During development, the U.S. English source text is pseudo-translated using different language characters (such as symbols, Korean and Arabic characters), "padded" to simulate the longer words of other languages, and then tested with international data by product teams. This enables developers to test for translation and internationalization issues (such as any hard-coded strings still in English, or spacing, alignment, and bi-directional rendering issues) before external translation starts.
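As an illustration of that pseudo-translation step, here is a toy JavaScript sketch (my own, not Oracle's actual tooling): it swaps ASCII letters for accented look-alikes and pads the string, so any hard-coded or truncated text stands out during testing.

// Illustrative only: substitute accented look-alike characters and pad by
// ~30% to simulate the longer words of other languages.
var accentMap = { a: 'à', e: 'é', i: 'î', o: 'ö', u: 'ü', A: 'Å', E: 'É', O: 'Ø' };

function pseudoTranslate(s) {
  var swapped = s.replace(/[aeiouAEO]/g, function (ch) { return accentMap[ch]; });
  var padding = new Array(Math.ceil(s.length * 0.3) + 1).join('~');
  return '[' + swapped + padding + ']'; // brackets expose concatenated/truncated strings
}

console.log(pseudoTranslate('Last Name')); // [Làst Nàmé~~~]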

Hebrew version of Oracle Sales Cloud Release 8

Internationalized from the get-go: Oracle Sales Cloud in Hebrew (Release 8) shows the built-in bi-directional power of Oracle ADF.

For every target language, the Oracle Worldwide Product Translation Group (WPTG) contracts with professional translators in each country to perform the translation work. Importantly, these in-country translators do not perform literal translations of content but use the choice terms, style, and tone that local Oracle WPTG language specialists specify and that our applications users demand in each country or locale.

Mockup of French R10 Oracle Sales Cloud

Mockup of an Oracle Sales Cloud landing page in French. (Image credit: Laurent Adgie, Oracle Senior Sales Consultant)

NLS versions of Oracle Applications Cloud are made available to customers at the same time as the U.S. English version, released as NLS language packs that contain the translated user interface (UI) text, messages, and embedded help for each language. The secret sauce of this ability to make language versions available at the same time is a combination of Oracle technology and smart people too: translation, in fact, begins as soon as the text is created, and not when it's released! 

And, of course, before the NLS versions of Oracle Applications Cloud are released, Oracle language quality and functional testing teams rigorously test them.

The Language of Choice

Imagine an application that will be used in North America, South America, Europe, and Asia. What words should you choose for the UI?

  • The label Last Name or Surname?
  • The label Social Security Number, Social Insurance Number, or National Identification Number?
  • The MM-DD-YYYY, DD-MM-YYYY, or YYYY-MM-DD date format?

The right word choice for a label in one country, region, or protectorate is not necessarily the right word choice in another. Insight and care are needed in that decision. Language is a critical part of UX and, in the Oracle Applications Cloud UX, all the text you see is written by information development professionals, leaving software developers free to concentrate on building the applications productively and consistently using UX design patterns based on Oracle ADF components.
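To see how much a single convention can vary, here is a small sketch using JavaScript's standard Intl API (an illustration of the general i18n principle, not Oracle ADF code; exact output depends on the runtime's locale data):

// One date value, three locale renderings: the application stores a single
// value and the i18n layer applies each locale's convention.
var d = new Date(2015, 8, 7); // 7 September 2015

console.log(new Intl.DateTimeFormat('en-US').format(d)); // e.g. 9/7/2015   (month first)
console.log(new Intl.DateTimeFormat('en-GB').format(d)); // e.g. 07/09/2015 (day first)
console.log(new Intl.DateTimeFormat('ja-JP').format(d)); // e.g. 2015/9/7   (year first)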

Our focus on language design—choosing accurate words and specialized terms and pairing them with a naturally conversational voice and tone—and on providing descriptions and context for translators and customizers alike also enables easy translation. Translated versions of application user interface pages are ultimately only as accurate, clear, and understandable as their source pages.

In a future blog post we'll explore how PaaS4SaaS partners and developers using the Oracle Applications Cloud Simplified UX Rapid Development Kit can choose words for their simplified UIs that will resonate with the user’s world and optimize the overall experience.

For More Information

For insights into language design and translation considerations for Oracle Applications Cloud and user interfaces in general, see the Oracle Not Lost in Translation blog and Blogos.

Solaris VM Templates for WebLogic Server 12.1.3

Steve Button - Sun, 2015-09-06 19:45
A new set of VM Templates for Solaris has been made available on OTN.
These templates provide a quick and easy way to spin up pre-built WebLogic Server 12.1.3 instances using either Solaris Zones or Oracle VM Server for SPARC.

http://www.oracle.com/technetwork/server-storage/solaris11/downloads/solaris-vm-2621499.html


Last Successful login time in 12c

Pakistan's First Oracle Blog - Sun, 2015-09-06 03:24
One cool, small, yet valuable feature in Oracle 12c is the display of the 'Last Successful login time'. If authentication is at the OS level, then it isn't shown. A small demo is as follows:

[oracle@targettest ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 6 18:22:00 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@targettest ~]$ sqlplus 'hr/hr'

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 6 18:22:07 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sun Sep 06 2015 18:21:56 +10:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@targettest ~]$
[oracle@targettest ~]$ sqlplus 'hr/hr' as sysbackup

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 6 18:22:12 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@targettest ~]$
Categories: DBA Blogs

\d in Vertica

Pakistan's First Oracle Blog - Sat, 2015-09-05 00:13
Here is a quick, neat way to list important and oft-needed information such as the names of databases, schemas, users, tables, and projections. We can also use patterns with '\d' to narrow down the results. Let's see it in action:

Connect with Vertica vsql:

vsql  -U dbadmin -w vtest -h 0.0.0.0 -p 5433 -d vtest

 Welcome to vsql, the Vertica Analytic Database interactive terminal.
Type:  \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit
vtest=>
vtest=> \dn

List of schemas
Name     |  Owner  | Comment
--------------+---------+---------
v_internal   | dbadmin |
v_catalog    | dbadmin |
v_monitor    | dbadmin |
public       | dbadmin |
TxtIndex     | dbadmin |
store        | dbadmin |
online_sales | dbadmin |
mytest       | mytest  |
(8 rows)
vtest=> \dn mytest

List of schemas
Name  | Owner  | Comment
--------+--------+---------
mytest | mytest |
(1 row)
vtest=> \dn my*

List of schemas
Name  | Owner  | Comment
--------+--------+---------
mytest | mytest |
(1 row)
vtest=> \dn v

List of schemas
Name | Owner | Comment
------+-------+---------
(0 rows)
vtest=> \dn *v*

List of schemas
Name    |  Owner  | Comment
------------+---------+---------
v_internal | dbadmin |
v_catalog  | dbadmin |
v_monitor  | dbadmin |
(3 rows)

Likewise you can list other information, like projections:
vtest=> \dj

List of projections
Schema    |            Name             |  Owner  |       Node       | Comment
--------------+-----------------------------+---------+------------------+---------
mytest       | ptest                       | mytest  | v_vtest_node0002 |
mytest       | testtab_super               | mytest  |                  |

To list views:
vtest=> \dv
No relations found.

If you connect with the mytest user and run:
vtest=> \dt

List of tables
Schema |  Name   | Kind  | Owner  | Comment
--------+---------+-------+--------+---------
mytest | testtab | table | mytest |
(1 row)

Following are more '\d' options from help:
Informational

\d [PATTERN]   describe tables (list tables if no argument is supplied)
PATTERN may include system schema name, e.g. v_catalog.*
\df [PATTERN]  list functions
\dj [PATTERN]  list projections
\dn [PATTERN]  list schemas
\dp [PATTERN]  list table access privileges
\ds [PATTERN]  list sequences
\dS [PATTERN]  list system tables. PATTERN may include system schema name
such as v_catalog, v_monitor, or v_internal.
Example: v_catalog.a*
\dt [PATTERN]  list tables
\dtv [PATTERN] list tables and views
\dT [PATTERN]  list data types
\du [PATTERN]  list users
\dv [PATTERN]  list views
Categories: DBA Blogs

The Importance of Data Virtualization

Kubilay Çilkara - Fri, 2015-09-04 12:37
It’s been a long time since the way data is stored and accessed was last rethought. We went from scrolls to books to mainframes, and this last method hasn’t budged all that much over the last decade or so. This is despite the fact that we keep creating more and more information, which means that better ways of storing and finding it would make a world of difference. Fortunately, this is where data virtualization comes into play. If your company isn’t currently using this type of software, you’re missing out on a better way of leveraging your organization’s data.

Problems with Traditional Methods

No matter what line of work you’re in, your company is creating all kinds of information each and every business day. We’re not just talking about copy for your website or advertising materials either. Information is created with each and every transaction and just about any time you interact with a customer.

You’re also not just creating information in these moments. You need to access stored data too. If you don’t do this well—and many, many companies don’t—you’re going to end up with a lot of unnecessary problems. Of course, you’re also wasting a lot of money on all that info you’re creating but not using.

The main problem with traditional methods of data storage and retrieval is that they rely on movement and replication processes and intermediary servers and connectors for integrating via a point-to-point system. This worked for a long time. For a few companies, it may seem like it’s still working to some degree.

However, there’s a high cost to this kind of data movement process. If you’re still trying to shoulder it, chances are your overhead looks a lot worse than competitors that have moved on.

That’s not the only problem worth addressing, though. There’s also the huge growth of data that’s continuing to head north. We’ve touched on this, but the problem is so prevalent that there’s a term for it throughout every industry: Big Data. Obviously, it refers to the fact that there is just so much information out there, but the fact this term exists also shows how much it affects all kinds of companies.

Big Data is also a statement about its creation. Its sheer proliferation is massive. Every year, the amount increases at an exponential rate. There’s just no way the old methods for storing and finding it will work.

Finally, customers expect that you’ll be able to find whatever information you need to meet their demands. Whether it’s actual information they want from you or it’s just information you need to carry out their transaction, most won’t understand why this isn’t possible. After all, they use Google and other search engines every day. If those platforms can find just about anything for free, why can’t your company do the same?

The Solution Is Data Virtualization

If all of the above sounded a bit grim, don’t worry. There’s a very simple solution waiting for you in data virtualization. This type of software overcomes the challenges that come with your data being stored all over your enterprise. You never again have to run a million different searches to collect it all or come up with some halfway decent workaround. Finally, there’s a type of platform made specifically for your company’s data gathering needs.

This software isn’t just effective. It’s convenient too. You work through one single interface and can have access to the entirety of your company’s data store. What it does is effectively separate the external interface from the implementation going on inside.

How It Works

Every data virtualization platform is going to have its own way of working, so it’s definitely worth researching this before putting your money behind any piece of software. However, the basic idea remains the same across most titles.

With virtualization software, the data doesn’t have to be physically moved because this technology uses metadata to create a virtual perspective of the sources you need to get to. Doing so provides a faster and much more agile way of getting to and combining data from all the different sources available:
  • Distributed
  • Mainframe
  • Cloud
  • Big Data
As you can probably tell, this makes it much easier to get your hands on important information than the way you’re currently doing it.

Finding the Right Title for Your Company

Although they all serve the same purpose and the vast majority will follow the outline we just went through, there are still enough virtualization software titles out there that it’s worth thinking about what your best option will look like. As with any type of software, you don’t want to invest your money anywhere it’s not going to do the most good.

The good news is that this software has been around long enough that there have been best practices established for how it should work.
First, you want to look for a title that will actually transform the way your mainframe works by leveraging what it can do for every search. This is just a good use of your resources and gives you more bang for your buck as far as your mainframe is concerned. Software that uses it for virtualization purposes is going to work even better than a distribution-based application and won’t cost any more.

However, it’s also going to work a lot better for that price too. A lot of companies also love that this way of carrying out a search is more secure as well. The last thing you want is to pay for a piece of software that’s actually going to leave you worse off.

Secondly, although this may sound complex, you can and should find a solution that keeps things simple. Data integration can be straightforward with the method we just described without any need for redundant processes that would slow down your ability to scale up.

With data virtualization, there is no downside. Furthermore, it’s becoming more and more of a necessity. Companies are going to have to invest in this software as Big Data continues to get bigger.



Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, about topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data and Enterprise Mobility.


Categories: DBA Blogs

Don’t Settle When It Comes to Enterprise Search Platforms

Kubilay Çilkara - Fri, 2015-09-04 12:32
No company should try to operate for very long without an enterprise search platform. With so many options out there, though, you could be confused about which one is worth a spot in your organization’s budget. Let’s look at two very common workarounds some have tried, and then we will talk about why you must go with a reputable developer when you make your final decision.

Leveraging Web Search Engines

One way a lot of companies have handled their need for an enterprise search platform is to simply use one like Google or some other web engine. On paper, this may seem to make a lot of sense. After all, who doesn’t trust Google? Most of us use it every single day and would be lost without the search engine giant. If you can leverage its power for your company’s needs, that would seem like a no-brainer, right?

Unfortunately, if you try this, you’ll learn the hard way that Google leaves a lot to be desired when it comes to this type of environment. That’s not to say it won’t work; it just won’t work well, and definitely not nearly as well as you need from a search engine working over your company’s internal systems. Google and other web search engines are fantastic at what they’re designed to do. They are not designed to work in an enterprise search environment, though. For that, you need a true enterprise search platform.

The major problem is that web search engines are all about sheer volume, which makes a lot of sense once you think about it. When you want to find a pizza parlor in your city, you want to know about every single option and then some. Google’s ability to harvest this much information and present it quickly is one of the reasons it’s so popular. Any search engine that can’t deliver this kind of volume is going to alienate users and soon be left on the scrap heap.

What web search engines like Google don’t do well, though, is carry out deep, detail-oriented searches. Again, this makes sense. Google is driven largely by keywords and backlinks. These are “surface searches” that weren’t created to do some of the tasks you need an enterprise search platform for.

Your employees may do some very simple searches, but they probably also have some pretty demanding queries too. Enterprise search platforms are designed to handle these types of searches and drill down to the last detail in any file or piece of data they come across. If your employees aren’t able to do these types of searches on a regular basis, the search software you invested in will be a waste of money. Worse, it could land you with all kinds of other problems because your people will think they’re doing the best possible search they can and conclude that what they want can’t be found.

Also, don’t forget that your company stores information in different places. Yes, it may all be on the same server, but in the digital world, there are various “silos” that hold onto information. Each silo is its own environment with its own rules. When you try using a web search engine to look through all your company’s silos, what will most likely happen is that it will have to go through one at a time. This is far from ideal, to say the least.

If you have a good number of silos, your employees will most likely give up. They won’t want to walk the search engine from one silo to the next like they’re holding onto the leash of a bloodhound. The whole point of a search engine is that it’s supposed to cut down on the exhaustive amount of “manual” work you’d otherwise have to do to find the data you need.

Silos aren’t all the same, so you want a search program that can go in and out of the type you have without requiring the employee to reconfigure their query.


Open Source Software


Another very popular method of acquiring enterprise search software is to go with an open source title. Once again, on paper, this seems like a very logical route to take. For one thing, the software is free. You can’t beat that price, especially when it comes to an enterprise-level platform. This usually seems like an unbeatable value for small- and medium-sized businesses that don’t have a lot of leeway where their budget is concerned.

That’s just one benefit that proponents of open source search engines tout though. There’s also the fact that you can modify the software as you see fit. This gives the user the ability to basically mold the code into a result that will fit their company’s needs like a glove.

There are a couple problems though. The first is that you get what you pay for. If your open source software doesn’t deliver, you have no one to complain to. You can always do your research to find a title that others have given positive reviews to. At the end of the day, though, don’t expect much in the way of support. There are plenty of forums to go through to find free advice, but that’s not the type of thing most professionals want to wade through.

When you go with a true, professional version, your employees will never be far from help if something goes wrong. Most companies these days have email and phone lines you can use, but many also have chat boxes you can open up on their website. None of these will be available for your company if you go with open source software.

Also, you can definitely modify the search engine software, but this isn’t necessarily unique to open-source platforms. Professional search platforms can be modified a number of ways, allowing the user to streamline their software and fine-tune the result so they get relevant results again and again.

This type of architecture, known as a pipeline, is becoming more and more the standard in this industry. Enterprise platforms come with all kinds of different search features, but that can also be a problem if they start getting in the way of one another. To ensure there are never too many cooks in the kitchen, pipeline architecture is used to line them all up, one in front of the other. By doing so, you’ll have a much easier time getting the search results you want, especially because you can just reconfigure these features as you see fit whenever you like.


Ongoing Updates Are Yours


One very important aspect of professional enterprise search platforms that is worth pointing out is that most developers are constantly putting out updates for their product. This is the same thing web search engines do, of course. Google, Yahoo and Bing all release upgrades constantly. The difference, however, is that enterprise platforms get upgrades that are specific to their purposes.

While there are updates for open source software, expect sporadic results. The developer of your favourite title could give up and go on to another project, leaving you to look for someone else to continue creating great updates.

If you have a skilled developer who is familiar with open source search engines on your team, this may be an attractive option. Still, most will find this route is just too risky. Most of us also don’t have that kind of developer on staff and it wouldn’t be worth it to hire someone on specifically for this reason (it’d be much more affordable to just buy professional software). Also, remember that, even if you do have this kind of talent within your ranks, you’ll soon become completely beholden to them if you start trusting them with this kind of job. Having someone who is completely responsible for your search engine being able to work and not having someone else on staff who can offer support or replace them is not a good idea.


Scalability Is a Given

Every company understands how important scalability is. This is especially true when it comes to software though. The scalability of a program can really make or break its value. Either it will turn into a costly mistake that greatly holds your business back or it will become the type of agile asset you actually take for granted, it’s so helpful.

Open source platforms are only as scalable as their code allows, so if the person who first made it didn’t have your company’s needs in mind, you’ll be in trouble. Even if they did, you could run into a problem where you find out that scaling up actually reveals some issues you hadn’t encountered before. This is the exact kind of event you want to avoid at all costs.

Now that you realize the importance of going with a reputable developer, your next step is picking which one to choose. You definitely won’t lack for options these days, so just take your time to ensure you go with the best one for your business.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software (www.rocketsoftware.com), about topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data and Enterprise Mobility.
Categories: DBA Blogs

Measuring Tuxedo Queuing in the PeopleSoft Application Server

David Kurtz - Fri, 2015-09-04 09:13

Why Should I Care About Queuing?
Queuing in the application server is usually an indicator of a performance problem, rather than a problem in its own right.  Requests will back up on the inbound queue because the application server cannot process them as fast as they arrive.  This is usually seen on the APPQ, which is serviced by the PSAPPSRV process, but applies to other server processes too.  Common causes include (but are not limited to):
  • Poor performance of either SQL on the database or PeopleCode executed within the application server is extending service duration
  • The application server domain is undersized for the load.  Increasing the number of application server domains or application server processes may be appropriate.  However, before increasing the number of server processes it is necessary to ensure that the physical server has sufficient memory and CPU to support the domain (if the application server CPU is overloaded then requests move from the Tuxedo queues to the operating system run queue).
  • The application server has too many server processes per queue, causing contention in the system calls that enqueue and dequeue requests to and from the IPC queue structure.  A queue with more than 8-10 application server processes can exhibit this contention.  There will be a queue of inbound requests, yet not all the server processes will be busy.
When user service requests spend time queuing in the application server, that time is part of the users' response time.  Application server queuing is generally to be avoided (although it may be the least worst alternative). 
What you do about queuing depends on the circumstances, but it is something that you do want to know about.
3 Ways to Measure Application Server Queuing
There are a number of ways to detect queuing in Tuxedo:
  • Direct measurement of the Tuxedo domain using the tmadmin command-line interface.  A long time ago I wrote a shell script, tuxmon.sh.  It periodically runs the printqueue and printserver commands on an application server and extracts comma-separated data to a flat file that can then be loaded into a database.  It would have to be configured for each domain in a system.
  • Direct measurement with PeopleSoft Performance Monitor (PPM).  Events 301 and 302 simulate the printqueue and printserver commands.  However, event 301 only works from PT8.54 (and at the time of writing I am working on a PT8.53 system).  Even then, the measurements would only be taken once per event cycle, which defaults to every 5 minutes.  I wouldn't recommend increasing the sample frequency, so this will only ever be quite a coarse measurement.
  • Indirect measurement from sampled PPM transactions.  Although it also includes time spent on the return queue and unpacking the Tuxedo message, this technique is what the rest of this article is about.
Indirectly Measuring Application Server Queuing from Transactional Data
Every PIA and Portal request includes a Jolt call made by the PeopleSoft servlet to the domain.  The Jolt call is instrumented in PPM as transaction 115.  Various layers in the application server are instrumented in PPM, and the highest point is transaction 400, which is where the service enters the PeopleSoft application server code.  Transaction 400 is always the immediate child of transaction 115.  The difference in the duration of these transactions is the duration of the following operations:
  • Transmitting the message across the network from the web server to the JSH (there is a persistent TCP socket connection).
  • Enqueuing the message on the APPQ queue (including writing the message to disk if it cannot fit on the queue).
  • Time spent in the queue.
  • Dequeuing the message from the queue (including reading the message back from disk if it was written there).
  • Unpacking the Tuxedo message and passing the information to the service function.
  • And then repeating the process for the return message back to the web server via the JSH queue (which is not shown in tmadmin).
I am going to make the assumption that the majority of the time is spent by the message waiting in the inbound queue and that time spent on the other activities is negligible.  This is not strictly true, but it is good enough for practical purposes.  Any error means that I will tend to overestimate queuing.
Some simple arithmetic can convert this duration into an average queue length. A queue length of n means that n requests are waiting in the queue.  Each second there are n seconds of queue time.  So the number of seconds per second of queue time is the same as the queue length. 
I can take all the sampled transactions in a given time period and aggregate the time spent between transactions 115 and 400.  I must multiply it by the sampling ratio, and then divide it by the duration of the time period for which I am aggregating it.  That gives me the average queue length for that period.
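Here is that arithmetic as a trivial JavaScript sketch, with made-up numbers standing in for one minute of sampled data:

// Average queue length = seconds of queue time per second of elapsed time.
// With sampled data, scale the observed queue time by the sampling ratio.
var samplingRate = 10;       // PPM keeps 1 in 10 transactions (an assumption)
var sampledQueueSecs = 42;   // sum of (transaction 115 - transaction 400) durations
var periodSecs = 60;         // the aggregation period

var avgQueueLen = samplingRate * sampledQueueSecs / periodSecs;
console.log(avgQueueLen);    // 7 -- on average, 7 requests were waiting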
This query aggregates queue time across all application server domains in each system.  It would be easy to examine a specific application server, web server or time period.
WITH c AS (
SELECT B.DBNAME, b.pm_sampling_rate
, TRUNC(c115.pm_agent_Strt_dttm,'mi') pm_agent_dttm
, A115.PM_DOMAIN_NAME web_domain_name
, SUBSTR(A400.PM_HOST_PORT,1,INSTR(A400.PM_HOST_PORT,':')-1) PM_tux_HOST
, SUBSTR(A400.PM_HOST_PORT,INSTR(A400.PM_HOST_PORT,':')+1) PM_tux_PORT
, A400.PM_DOMAIN_NAME tux_domain_name
, (C115.pm_trans_duration-C400.pm_trans_duration)/1000 qtime
FROM PSPMAGENT A115 /*Web server details*/
, PSPMAGENT A400 /*Application server details*/
, PSPMSYSDEFN B
, PSPMTRANSHIST C115 /*Jolt transaction*/
, PSPMTRANSHIST C400 /*Tuxedo transaction*/
WHERE A115.PM_SYSTEMID = B.PM_SYSTEMID
AND A115.PM_AGENT_INACTIVE = 'N'
AND C115.PM_AGENTID = A115.PM_AGENTID
AND C115.PM_TRANS_DEFN_SET=1
AND C115.PM_TRANS_DEFN_ID=115
AND C115.pm_trans_status = '1' /*valid transaction only*/
--
AND A400.PM_SYSTEMID = B.PM_SYSTEMID
AND A400.PM_AGENT_INACTIVE = 'N'
AND C400.PM_AGENTID = A400.PM_AGENTID
AND C400.PM_TRANS_DEFN_SET=1
AND C400.PM_TRANS_DEFN_ID=400
AND C400.pm_trans_status = '1' /*valid transaction only*/
--
AND C115.PM_INSTANCE_ID = C400.PM_PARENT_INST_ID /*parent-child relationship*/
AND C115.pm_trans_duration >= C400.pm_trans_duration
), x as (
SELECT dbname, pm_agent_dttm
, AVG(qtime) avg_qtime
, MAX(qtime) max_qtime
, c.pm_sampling_rate*sum(qtime)/60 avg_qlen
, c.pm_sampling_rate*count(*) num_services
FROM c
GROUP BY dbname, pm_agent_dttm, pm_sampling_rate
)
SELECT * FROM x
ORDER BY dbname, pm_agent_dttm
  • Transactions are aggregated per minute, so the queue time is divided by 60 at the end of the calculation because we are measuring time in seconds.
Then the results from the query can be charted in Excel (see http://www.go-faster.co.uk/scripts.htm#awr_wait.xls). This chart was taken from a real system undergoing a performance load test, and we could see the calculated average queue length rise and fall over the course of the test.

Is this calculation and assumption reasonable?
The best way to validate this approach would be to measure queuing directly using tmadmin.  I could also try this on a PT8.54 system, where event 301 will report the queuing.  This will have to wait for a future opportunity.
However, I can compare queuing with the number of busy application servers as reported by PPM event 302 for the CRM database.  Around 16:28 queuing all but disappears.  We can see that there were a few idle application servers, which is consistent with the queue being cleared.  Later the queuing comes back, and most of the application servers are busy again.  So it looks reasonable.
Application Server Activity

Oracle Priority Support Infogram for 03-SEP-2015

Oracle Infogram - Thu, 2015-09-03 17:58

RDBMS

SQL

SQL Developer

Scripting Oracle

node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database), from Scripting and Oracle: Christopher Jones.

MySQL

MySQL Enterprise Monitor 2.3.21 has been released, from MySQL Enterprise Tools Blog.

Solaris

SOA

Top tweets SOA Partner Community – August 2015, from SOA & BPM Partner Community Blog.

Java

Hyperion

Patch Set Update: Hyperion Strategic Finance 11.1.2.3.507, from Business Analytics – Proactive Support.

From the same source:

EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

…and Finally

node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Thu, 2015-09-03 17:24

Version 1.1 of node-oracledb, the add-on for Node.js that powers high-performance Oracle Database applications, is available on NPM.

This is a stabilization release, with one improvement to the behavior of the local connection pool. The add-on now checks whether pool.release() should automatically drop sessions from the connection pool. This is triggered by conditions where the connection is deemed to have become unusable. A subsequent pool.getConnection() will, of course, create a new, replacement session if the pool needs to grow.
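For context, here is a minimal sketch of the pool pattern this change affects (the credentials and connect string are placeholders):

var oracledb = require('oracledb');

oracledb.createPool(
  { user: 'hr', password: 'welcome', connectString: 'localhost/orcl' },  // placeholders
  function (err, pool) {
    if (err) { console.error(err.message); return; }
    pool.getConnection(function (err, conn) {
      if (err) { console.error(err.message); return; }
      // ... use conn for queries ...
      // In 1.1, if the connection has become unusable, releasing it drops
      // the session from the pool; a later getConnection() grows the pool back.
      conn.release(function (err) {
        if (err) console.error(err.message);
      });
    });
  });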

Immediately as we were about to release, we identified an issue with lobPrefetchSize. Instead of delaying the release, we have temporarily made setting this attribute a no-op.

The changes in this release are:

  • Enhanced pool.release() to drop the session if it is known to be unusable, allowing a new session to be created.

  • Optimized query memory allocation to account for different database-to-client character set expansions.

  • Fixed build warnings on Windows with VS 2015.

  • Fixed truncation issue while fetching numbers as strings.

  • Fixed AIX-specific failures with queries and RETURNING INTO clauses.

  • Fixed a crash with NULL or uninitialized REF CURSOR OUT bind variables.

  • Fixed potential memory leak when connecting throws an error.

  • Added a check to throw an error sooner when a CURSOR type is used for IN or IN OUT binds. (Support is pending).

  • Temporarily disabled setting lobPrefetchSize.

Issues and questions about node-oracledb can be posted on GitHub or OTN. We need your input to help us prioritize work on the add-on. Drop us a line!

Installation instructions are here.

job_name cannot be null

Laurent Schneider - Wed, 2015-09-02 02:23


exec dbms_scheduler.create_job(job_name=>null,job_type=>'PLSQL_BLOCK',job_action=>'BEGIN NULL; END;')
ORA-27451: JOB_NAME cannot be NULL
ORA-06512: at "SYS.DBMS_ISCHED", line 146
ORA-06512: at "SYS.DBMS_SCHEDULER", line 288
ORA-06512: at line 1

This sounds like a proper error message. A bit less obvious is the drop_job message


SQL> exec dbms_scheduler.drop_job(job_name=>null)
ORA-20001: comma-separated list invalid near
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 743
ORA-06512: at line 1

comma-separated list invalid near what?

Ok, why would you create an empty job? Obviously you wouldn’t. But remember that job_name could be a very long expression that won’t fit in your VARCHAR2(30) variable.


SQL> begin 
  dbms_scheduler.create_job(job_name=>
'                  "SCOTT"                    '||
'                     .                       '||
'             "JOB10000000000000000000001"    ',
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN NULL; END;');
end;
/

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('scott.job10000000000000000000001')

PL/SQL procedure successfully completed.

If you use drop_job in the exception clause without catching the exceptions that drop_job itself may raise, it could lead to this ORA-20001 when the job name is null.

For exception handling, we could improve


BEGIN
  CREATE JOB 
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    DROP JOB
    output message
    RAISE
END

into

BEGIN
  CREATE JOB 
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    LOOP
      BEGIN
        DROP JOB
        EXIT
      EXCEPTION 
        WHEN IS_RUNNING THEN
          sleep
        WHEN OTHERS THEN
          output message
          EXIT
      END
    END LOOP
    output message
    RAISE
END

Loose Rules Sink Fools

Bradley Brown - Tue, 2015-09-01 22:11
You've probably heard someone talk about how awesome it is that they get to bring their dog to work.  Or how there is NO dress code.  Or they have unlimited vacation.

Many corporate cultures have what I would call loose rules.  Most people do not like confrontation - in fact, many people will do anything to avoid it.  Which means they will not tell you when you're in the gray area of the corporate culture - but believe me, they take note.

Loose rules range from dress code to bringing dogs to work and many others.  Does dress code matter for the job you're doing?  What if you're in sales and NEVER leave the office?  What if you RARELY leave the office, but sometimes you do?  What if the CEO stops by and says "hey, can you run to this meeting with me (or for me) tonight?"  Then they look at how you're dressed and say "maybe next time?"  Yes, you're within the "rules," but it WILL affect your career - and not usually in a positive way.

Bring your dog to work.  Where's the line?  What if someone gets a puppy, which they bring to work every day?  Yes, puppies are cute, REALLY cute.  What if you're the person that brought the puppy in every day?  You spend an hour taking the puppy for a walk, taking it outside (while it has you trained rather than vice versa).  Or worse yet, every time your boss walks by, someone is distracted by petting your puppy?  They're cute!  Too cute to pass up!  You're negatively affecting productivity.  Again, this is not likely to work well for your career.

Many companies have gone to unlimited vacation.  What if 2 weeks after you start, you take a 2 month vacation?  According to policy that's OK, but I wouldn't expect you to have a job when you return.  Where's the line?  What's gray?   What's acceptable?

Are you within the rules in each of the above examples?  Absolutely, but loose rules have unspoken rules.  Rules nobody will actually admit to quite often.  Startups go through a number of stages in a very short period of time.  People are evaluated regularly based on the current company stage.  Some people survive from one stage to the next and others do not.  Those who are not performing or viewed as not performing (e.g. taking care of their puppy all day) do not.

Don't let the loose rules sink your career!

Integrating Telstra Public WIFI API into Bluemix

Pas Apicella - Tue, 2015-09-01 19:51
I previously blogged about Integrating Telstra Public SMS Api as shown below.

http://theblasfrompas.blogspot.co.nz/2015/08/integrating-telstra-public-sms-api-into.html

Here I show how I integrated the Telstra Public WIFI Api into IBM Bluemix. This Api from Telstra is documented as follows. You need to register on http://t.dev to get the credentials to use their API, which I have previously done; that then enables me to integrate it onto Bluemix.

https://dev.telstra.com/content/wifi-api

Once again, here is the Api within the Bluemix Catalog. These screen shots show that the Api has been added to the Bluemix Catalog, from which it can then be consumed as a service.

Finally, here is a web-based application using Bootstrap (so it renders quite well on mobile devices as well) which allows you to enter your Latitude, Longitude and Radius to find Telstra WIFI Hotspots using the Telstra WIFI Api on IBM Bluemix:

http://pas-telstawifiapi.mybluemix.net/
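For readers who want a feel for the plumbing, here is a minimal Node.js sketch of the kind of server-side call such an app might make. The host, path, parameter names and OAuth step are my assumptions for illustration, not the documented contract; consult https://dev.telstra.com/content/wifi-api for the real API.

// Hypothetical sketch only: endpoint path and query parameters are assumed.
var https = require('https');

function findHotspots(token, lat, lon, radius, cb) {
  var options = {
    host: 'api.telstra.com',                        // assumed host
    path: '/v1/wifi/hotspots?lat=' + lat + '&long=' + lon + '&radius=' + radius,
    headers: { Authorization: 'Bearer ' + token }   // OAuth 2 bearer token
  };
  https.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { cb(null, JSON.parse(body)); });
  }).on('error', cb);
}

findHotspots(process.env.TELSTRA_TOKEN, -33.86, 151.20, 500, function (err, hotspots) {
  if (err) console.error(err.message);
  else console.log(hotspots);
});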


More Information

Visit http://bluemix.net to get started
Categories: Fusion Middleware

What is the most dangerous food at Olive Garden?

Nilesh Jethwa - Mon, 2015-08-31 22:24

Olive Garden is one of the favorite destinations for Italian food, and today we got hold of the entire Olive Garden menu along with its nutrition data.

A typical meal at Olive Garden starts with a drink, appetizers [free bread sticks], a main dish, and finally the dessert.

So, going in the same sequence, let's see what the menu data for Wine and Beer has to offer.

Amount of Carbs per serving in your favorite Wine at Olive Garden

Read more at: http://www.infocaptor.com/dashboard/what-is-the-most-dangerous-food-at-olive-garden

Search plugins: Search Oracle docs from your browser search bar

RDBMS Insight - Mon, 2015-08-31 13:42

Tired of navigating to the SQL documentation every time you need to look up syntax? I created a search plugin so that you can search the SQL documentation directly from your browser’s search bar:

search

If you’re going to be doing a lot of looking-up, you can make this your default search engine. Click on “Change search settings” in the search bar dropdown, or go to Preferences > Search and select it:

change default search

I also created a search plugin for the 12c database documentation as a whole.

To install either or both plugins in Firefox, go to this page and click on the “Install Search Plugin” button.

Tested in Firefox. OpenSearch plugins reportedly work in IE too. Use IE? Try it and let me know if it works for you.

UPDATE: I added these plugins to mycroftproject.com as well. Thanks, Uwe, for pointing me to it! Also, check out Uwe’s OERR search plugin in the comments below.

Categories: DBA Blogs

The Golden Path

Floyd Teter - Mon, 2015-08-31 12:07
Geek Warning:  The Golden Path is a term in Frank Herbert's fictional Dune universe referring to Leto II Atreides's strategy to prevent humanity's ultimate destruction.

Just back from a little "stay-cation".  My batteries were running a little low, so it was good to recharge for a bit.  The #Beat39 theme continued to roll around in my brain and I want to share a predominant line of thinking from that.

Back in the olden days when Oracle was first developing Fusion Applications, they made a big effort to discover common threads of business practices across a range of industries and organizations. Processing invoices, controlling inventory, managing employee performance reviews, completing projects, billing customers...it's a long list of common business practices and common activities.

The result of that effort was a set of common "best practices", by industry, that were baked into Fusion Applications.  That collection of best practices became known as the Oracle Business Process Model ("Oracle BPM").  You can see an example for the Project Portfolio Management Suite here.  As Fusion Applications have evolved into Oracle Cloud Application Services (Oracle's SaaS offerings), Oracle BPM has evolved right along with it.  You'll find the latest Oracle BPM in Oracle SaaS.

Back in the really olden days, customers and their implementation partners would generally follow a three-step implementing strategy:  1) understand the customer's current business process; 2) design the customer's future business process; 3) implement enterprise software to model the customer's future business process as closely as possible.

With today's SaaS applications, customers may be better served by following a different strategy: 1) configure a SaaS zone and test the "baked in" business processes with an eye toward utilizing those processes in your own organization; 2) address and resolve any business process gaps; 3) test and go live.  In short, maximize your use of enterprise software in the way the software was designed to be used, business processes and all.  Being open to business process change is the "Golden Path" to a successful SaaS implementation.

While this idea is nothing new, it's a pretty fundamental shift in perspective.  Thoughts?  Comments welcome.

A Startup is Like Making a Trip to Each Continent

Bradley Brown - Sun, 2015-08-30 22:43
Maybe you realize how difficult it is to go from an idea to a successful company.  Maybe you've attempted one (or more) yourself.  Is the American dream a startup?  It is for some, but not all.  If you start a company and you're the only employee and you don't get paid, is your business a startup?  Is it successful?  Beauty and success are in the eye of the beholder.  You might hear people suggest you set your sights and expectations lower so you guarantee success.  In fact, you might hear a lot of different conflicting things over your life.  In Founders Institute we call this "mentor whiplash."  One mentor says yes, another says no.  One says go and another says stop.  There's no right way to do most things in the startup world.  Set your own goals - think about your definition of success and strive for it...every day.

One thing I do know is that MOST people need a partner or partners to "complete them."  In other words, we all have gaps (and strengths) in our personality.  Your partners should fill your gaps.  If they don't, your company will have a gap.  If you're a perfectionist, you're going to want to find a partner who isn't.  If you're a detailed person, you're going to want to find a partner who isn't.  If you don't have any gaps, congrats!  I have many!

So how is a startup similar to making a trip to every continent?  First, when I say startup, I'm referring to a funded startup.  If you're trying to build a lifestyle company (i.e. one that provides a nice lifestyle for you), don't take someone else's money to do this - they will NOT be happy.  If you can build a business on your own and without any money, that's a dream come true for many.  That's not my dream.  So like I said, the basic premise here is that you're building a business that is going to require capital (i.e. money) to get it going.

When do you raise your first dollar?  My preference is to raise money as soon as you can.  In other words, don't spend any more of your own money to start the business than you absolutely need to.  Even if you have money.  Why?  Because in my view, if you can't get someone else to believe in your idea, it probably isn't a fundable idea.

So let's say you start your trip in a rich continent such as North America.  You can get to a lot of places by driving around.  Some might be safe, some might not.  In fact, you can probably get to your second continent without hopping on a plane - i.e. South America.  We could say that's similar to your "self funding" stage of your business.  At some point you're going to have to take off on a plane to get to the next continent.  In the startup world, continents are like "fund raising series."

The first round of funding (continent 1 - North America) is your own cash.  Make this your smallest and quickest continent to get behind you.  The second round (continent 2 - South America) is your seed round.  This is sometimes called the friends and family round.  Get people believing in your idea ASAP.  Get your seed funding ASAP.  The 3rd round, your Series A round (continent 3 - Australia) is a stretch.  It's a long and big flight.  Most companies frankly never get to this phase.  Maybe you're able to go to Europe instead and the flight is shorter.  It's still a long way and you better have the fuel you need to get to that next round.

So yes, if you start in North America, using your own money, you might have a lot of money.  You might have a nice vehicle to get to your next continent.  But at some point, to get to continent 3 of 7, you're going to have to take off.  When are you going to fuel up next?  Are you going to attempt to fly around the entire world without refueling?  Of course not.  That would be similar to trying to raise $1B in your first round of funding.  It will never happen.

Seed round valuations tend to range from $1M to $3M.  If you're a proven startup person and 100s of other "ifs," you're probably not reading my blog, but yes, your seed round valuation would be higher.  If you only have an idea, your initial valuation may be $0, and it could be less than $1M.

If you try to fuel up in the middle of the ocean, you're going to run out of fuel and crash.  If you stop in a dangerous place (i.e. the wrong investors, investors who run out of money, investors with the wrong expectations, etc.), you could also be extorted for money.  Depending on where you stop, fuel could be reasonable or it could be expensive.  The more you need the fuel, the more expensive it is.  All of this is true in the startup world too.  If you don't make it to the next continent (funding stage) with your fuel (funding), you're done.  If you stop in the wrong place (i.e. you don't have the right metrics in place by the time you get there), your next funding round might not be fatal, but it could be VERY expensive (i.e. a low valuation equals a high percentage of stock you give up for very little money).


It's important to plan your trip (i.e. startup).  It's important that you can make it to the next continent safely.  Where you start is important.  Focus is important.  If you deliver a message like "we'll do that" for everything a potential investor brings up, they will not want to give you the fuel (cash) you need.  What if you pulled into a gas station (called an FBO in the flight world) with $1000 on you?  Let's say the FBO has a casino in it.  On your walk to the casino you notice a bar, so you stop in there.  You go into the casino and bet $500 on black.  What would your crew (your employees) have to say about your behavior?  A bit confused about the goal?  A lack of focus on the goal here?  What would your investors (i.e. the FBO employees) think about your ability to pay for the fuel?  Have a mission and live for it.  Know what you need to do and do it.

Whatever you do, bet the farm on your focus.  One thing, not 2, not 3.  Don't try to be more than one thing.  Don't fuel up or raise money in the wrong location (i.e. bad investors) or at the wrong time (i.e. middle of the ocean).  Keep in mind that you MUST get to the next continent with whatever you have from your last funding round.  In other words, DO NOT try to raise money $30k at a time.  You'll spend all of your money raising money and you'll end up in the middle of the ocean without any more fuel.  Each round needs to get you to the next round - or...it's going to be costly or deadly.


Prepare for your trip.  Be ready.  Your investors are your lifeline to the next continent.  Respect them and do what they expect you to do to get to the next round.  If you don't, you both lose - unless they get your company and then they do something good with it.  But that's not what they want.  That's not what you want.  Good luck and safe travels!

Thank you to Daniel Feher for the use of his amazing maps.  You can find more great maps on his website: www.freeworldmaps.net.

Using DBMS_OUTPUT with Node.js and node-oracledb

Christopher Jones - Sun, 2015-08-30 20:20

The DBMS_OUTPUT package is the standard way to "print" output from PL/SQL. The way DBMS_OUTPUT works is like a buffer. Your Node.js application code turns on DBMS_OUTPUT buffering, calls some PL/SQL code that puts text into the buffer, and then later fetches from that buffer. Note: any PL/SQL code that calls DBMS_OUTPUT runs to completion before any output is available to the user. Also, other database connections cannot access your buffer.

A basic way to fetch DBMS_OUTPUT with node-oracledb is to bind an output string when calling the PL/SQL dbms_output.get_line() procedure, print the string, and then repeat until there is no more output. Another way that I like is to wrap the dbms_output.get_line() call into a pipelined function and fetch the DBMS_OUTPUT using a SQL query.

The following code shows both methods.

/*
  NAME
    dbmsoutput.js

  DESCRIPTION
    Shows two methods of displaying PL/SQL DBMS_OUTPUT in node-oracledb.
    The second method depends on these PL/SQL objects:

      create or replace type dorow as table of varchar2(32767);
      /
      show errors

      create or replace function mydofetch return dorow pipelined is
        line varchar2(32767);
        status integer;
        begin loop
          dbms_output.get_line(line, status); 
          exit when status = 1;
          pipe row (line);
        end loop;
      return; end;
      /
      show errors

*/

'use strict';

var async = require('async');
var oracledb = require('oracledb');
var dbconfig = require('./dbconfig.js');

oracledb.createPool(
  dbconfig,
  function(err, pool) {
    if (err)
      console.error(err.message)
    else
      doit(pool);
  });

var doit = function(pool) {
  async.waterfall(
    [
      function(cb) {
        pool.getConnection(cb);
      },

      // Tell the DB to buffer DBMS_OUTPUT
      enableDbmsOutput,

      // Method 1: Fetch a line of DBMS_OUTPUT at a time
      createDbmsOutput,
      fetchDbmsOutputLine,

      // Method 2: Use a pipelined query to get DBMS_OUTPUT 
      createDbmsOutput,
      function(conn, cb) {
        executeSql(
          conn,
          "select * from table(mydofetch())", [], { resultSet: true}, cb);
      },
      printQueryResults
    ],
    function (err, conn) {
      if (err) { console.error("In waterfall error cb: ==>", err, "<=="); }
      conn.release(function (err) { if (err) console.error(err.message); });
    }
  )
};

var enableDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin dbms_output.enable(null); end;",
    function(err) { return cb(err, conn) });
}

var createDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin "
     + "dbms_output.put_line('Hello, Oracle!');"
     + "dbms_output.put_line('Hello, Node!');"
     + "end;",
    function(err) { return cb(err, conn) });
}

var fetchDbmsOutputLine = function (conn, cb) {
  conn.execute(
    "begin dbms_output.get_line(:ln, :st); end;",
    { ln: { dir: oracledb.BIND_OUT, type:oracledb.STRING, maxSize: 32767 },
      st: { dir: oracledb.BIND_OUT, type:oracledb.NUMBER } },
    function(err, result) {
      if (err) {
        return cb(err, conn);
      } else if (result.outBinds.st == 1) {
        return cb(null, conn);  // no more output
      } else {
        console.log(result.outBinds.ln);
        return fetchDbmsOutputLine(conn, cb);
      }
    });
  }
               
var executeSql = function (conn, sql, binds, options, cb) {
  conn.execute(
    sql, binds, options,
    function (err, result) {
      if (err)
        cb(err, conn)
      else
        cb(null, conn, result);
    });
}

var printQueryResults = function(conn, result, cb) {
  if (result.resultSet) {
    fetchOneRowFromRS(conn, result.resultSet, cb);
  } else if (result.rows && result.rows.length > 0) {
    console.log(result.rows);
    return cb(null, conn);
  } else {
    console.log("No results");
    return cb(null, conn);
  }
}

function fetchOneRowFromRS(conn, resultSet, cb) {
  resultSet.getRow(  // note: getRows would be more efficient
    function (err, row) {
      if (err) {
        cb(err, conn);
      } else if (row) {
        console.log(row);
        fetchOneRowFromRS(conn, resultSet, cb);
      } else {
        cb(null, conn);
      }
    });
}

The output is:

Hello, Oracle!
Hello, Node!
[ 'Hello, Oracle!' ]
[ 'Hello, Node!' ]

I used resultSet.getRow() for simplicity, but you will probably want to use resultSet.getRows() for efficiency. If you want to buffer all the output in the Node.js application, Bruno Jouhier has a nice implementation that builds up an array of query output in his GitHub gist query-all.js.
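For example, a getRows() variant of the fetch loop might look like this (the batch size of 100 is an arbitrary assumption):

// Variant of fetchOneRowFromRS() that fetches up to numRows rows per call,
// reducing the number of round-trips for large result sets.
function fetchRowsFromRS(conn, resultSet, numRows, cb) {
  resultSet.getRows(numRows, function (err, rows) {
    if (err) {
      cb(err, conn);
    } else if (rows.length > 0) {
      rows.forEach(function (row) { console.log(row); });
      fetchRowsFromRS(conn, resultSet, numRows, cb); // there may be more rows
    } else {
      cb(null, conn); // no more rows
    }
  });
}

// Usage: fetchRowsFromRS(conn, result.resultSet, 100, cb);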
