Feed aggregator

Spring Session - Spring Boot application for IBM Bluemix

Pas Apicella - Thu, 2015-09-10 07:28
The following guide shows how to use Spring Session to transparently leverage Redis to back a web application’s HttpSession when using Spring Boot.

http://docs.spring.io/spring-session/docs/current/reference/html5/guides/boot.html

The demo below is a simple Spring Boot / Thymeleaf / Bootstrap application to test session replication using Spring Session and Spring Boot within IBM Bluemix. The same demo will run on Pivotal Cloud Foundry as well.
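
To give a feel for the wiring before you open the guide, here is a minimal sketch (not taken from the demo project above) of a Spring Boot application class that enables Redis-backed HttpSession via Spring Session; the class name and setup are illustrative assumptions.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Minimal sketch, not the demo's actual code: with Spring Session and Spring Data Redis
// on the classpath and a Redis connection configured, @EnableRedisHttpSession replaces
// the container-managed HttpSession with one backed by Redis, so any application
// instance bound to the same Redis service sees the same session data.
@SpringBootApplication
@EnableRedisHttpSession
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}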

IBM DevOps URL ->

https://hub.jazz.net/project/pasapples/SpringBootHTTPSession/overview

Sample Project on GitHub ->

https://github.com/papicella/SpringBootHTTPSession



More Information

The Portable, Cloud-Ready HTTP Session
https://spring.io/blog/2015/03/01/the-portable-cloud-ready-http-session
Categories: Fusion Middleware

Is Apache Spark becoming a DBMS?

Dylan's BI Notes - Wed, 2015-09-09 22:02
I attended a great meetup and this is the question I have after the meeting. Perhaps the intent is to make it like a DBMS, like Oracle, or even a BI platform, like OBIEE? — The task flow is actually very similar to a typical database profiling and data analysis job. 1. Define your question […]
Categories: BI & Warehousing

Amazon S3 to Glacier - Cloud ILM

Pakistan's First Oracle Blog - Wed, 2015-09-09 19:27
Falling in love with Kate Upton is easy, but even easier is being swept off your feet by information lifecycle management (ILM) in Amazon Web Services (AWS). Simple, easily configurable, fast, reliable, cost-effective and proven are the words which describe it.

Pythian has been involved with ILM for a long time. Across various flavors of databases and systems, Pythian has long overseen the creation, alteration, and flow of data until it becomes obsolete. That is why AWS's ILM resonates perfectly well with Pythian's expertise.

Amazon S3 is an object store for short-term storage, whereas Amazon Glacier is AWS's cloud archiving offering for long-term storage. Rules can be defined on the information to specify and automate its lifecycle.

The following screenshot shows lifecycle rules being configured to move objects from an S3 bucket to Glacier and then permanently delete them: 90 days after the creation of an object, it is moved to Glacier, and then after 1 year it is permanently deleted. The graphical representation of the lifecycle shows how intuitive it is.
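
The same kind of rule can also be defined programmatically. Below is a rough sketch (not from the post) using the AWS SDK for Java v1 lifecycle classes; the bucket name is made up, and the day counts assume the object expires one year after the transition to Glacier, so check the SDK documentation and your own retention requirements before using anything like it.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.StorageClass;

public class GlacierLifecycleSketch {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Illustrative rule: transition to Glacier 90 days after creation,
        // then expire (delete) at 455 days, i.e. roughly a year after the transition.
        BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                .withId("archive-then-expire")
                .withStatus(BucketLifecycleConfiguration.ENABLED)
                .withTransition(new BucketLifecycleConfiguration.Transition()
                        .withDays(90)
                        .withStorageClass(StorageClass.Glacier))
                .withExpirationInDays(455);

        // "my-example-bucket" is a placeholder bucket name.
        s3.setBucketLifecycleConfiguration("my-example-bucket",
                new BucketLifecycleConfiguration().withRules(rule));
    }
}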



Categories: DBA Blogs

Oracle Priority Support Infogram for 09-SEP-2015

Oracle Infogram - Wed, 2015-09-09 15:49

RDBMS

Database Insider - September 2015 issue now available, from Exadata Partner Community EMEA.

Some good posts this week over at Update your Database – NOW!

SQL

If you’ve never been to the Ask Tom site and you have anything to do with Oracle technologies, well, where have you been hanging out? It is always one of the best sources for SQL, PL/SQL, Oracle internals, design, and more. See this posting for an update: Ask Tom Questions: the Good, the Bad and the Ugly, from All Things SQL.

OVM

Oracle VM VirtualBox 5.0.4 now available!, from Oracle’s Virtualization Blog.

Big Data


WebLogic

Additional new material WebLogic Community, from WebLogic Partner Community EMEA.

ADF

Insert and show whitespace in ADF Faces Components, from WebLogic Partner Community EMEA.


Mobile Computing

Quick Tip: Multi Line Labels on Command Button, from The Oracle Mobile Platform Blog.

EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

Are AMP Support Dates Based on EBS or EM Releases?

Analyze database activity using v$log_history

DBA Scripts and Articles - Wed, 2015-09-09 13:26

The v$log_history view contains important information on how the application’s users use the database; this view can help you define the periods with the most activity in the database. v$log_history queries: you can adapt the query to your needs, you just have to change the way you format the date to be able to drill down to the … Continue reading Analyze database activity using v$log_history
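
As an illustration of the kind of query the article describes (this sketch is not taken from the article), the JDBC snippet below counts redo log switches per hour from v$log_history; the connection string and credentials are placeholders, and the TO_CHAR format mask is the part you would change to drill down further.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LogHistoryActivity {

    public static void main(String[] args) throws SQLException {
        // One row per hour with the number of redo log switches in that hour;
        // change the format mask (e.g. 'YYYY-MM-DD HH24:MI') for finer granularity.
        String sql = "SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS log_hour, "
                   + "COUNT(*) AS log_switches "
                   + "FROM v$log_history "
                   + "GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24') "
                   + "ORDER BY 1";

        // Placeholder connection details; the account needs access to v$log_history.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "change_me");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s  %d%n", rs.getString("log_hour"), rs.getInt("log_switches"));
            }
        }
    }
}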

The post Analyze database activity using v$log_history appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

JavaScript stored procedures as Cloud data services.

Kuassi Mensah - Tue, 2015-09-08 17:37
Find out how to implement JavaScript Stored Procedures with Oracle Database 12c and how to invoke these through RESTful Web Services.

https://blogs.oracle.com/java/entry/nashorn_and_stored_procedures

ORA-01034: ORACLE not available

VitalSoftTech - Mon, 2015-09-07 20:28
What is the cause of the error ORA-01034 "ORACLE not available"?
Categories: DBA Blogs

Every Word Counts: Translating the Oracle Applications Cloud User Experience

Usable Apps - Mon, 2015-09-07 07:06
Loic Le Guisquet. Image by Oracle PR.

"Successfully crossing new frontiers in commerce needs people who understand local preferences as well as global drivers. In addition, technology has also been a great enabler of globalization, so the right balance between people and tech is key to success."

- Loïc Le Guisquet, Oracle President for EMEA and APAC

Oracle's worldwide success is due to a winning combination of smart people with local insight and great globalized technology. The Oracle Applications Cloud user experience (UX)—that competitive must-have and differentiator—is also a story of global technology and empathy for people everywhere.

UX provides for the cultural dynamics of how people work, the languages they speak, and local conventions and standards on the job. So, how do we deliver global versions of SaaS? Oracle Applications UX Communications and Outreach's Karen Scipi (@karenscipi) explains:

How We Build for Global Users

Oracle Applications Cloud is currently translated into 23 natural languages, besides U.S. English, using a process that ensures translated versions meet the latest user expectations about language, be it terminology, style, or tone.

Oracle HCM Cloud R10 Optimized for Global Working on YouTube

Global Workforce Optimization with Oracle HCM Cloud Release 10: More than 220 countries or jurisdictions supported.

Oracle Applications Cloud is designed for global use and deployment, leveraging Oracle ADF’s built-in internationalization (i18n) and translatability support to make development and translation easy. For example:

  • Translatable text is stored separately (externalized) from the application code for each language version (called a National Language Support [NLS] version).
  • Externalized text is contained in industry-standard XML Localization Interchange File Format (XLIFF)-based resource bundles, enabling not only safe, fast translation but also easy maintenance on a per language basis.
  • Currency, date, time, characters, reading and writing directions, and other local standards and conventions are automatically built in for developers. Oracle ADF uses the industry-standard i18n support of Oracle Java and Unicode.

In addition:

  • Users can enter and display data in their language of choice, independent of the language of the user interface, relying on what we call the multilingual support (MLS) architecture.
  • The software includes global and country-specific localizations that provide functionality for country- and region-specific statutory regulatory requirements, compliance reporting, local data protection rules, business conventions, organizational structure, payroll, and other real-world necessities for doing business with enterprise software.
  • Users can switch the language of their application session through personalization options.
  • NLS versions can be customized and extended in different languages by using Oracle composer tools to align with their business identity and processes. Translated versions, too, rely on the same architecture as the U.S. version for safe customizations and updates.

How We Translate

During development, the U.S. English source text is pseudo-translated using different language characters (such as symbols, Korean and Arabic characters), "padded" to simulate the longer words of other languages, and then tested with international data by product teams. This enables developers to test for translation and internationalization issues (such as any hard-coded strings still in English, or spacing, alignment, and bi-directional rendering issues) before external translation starts.
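
To make the idea concrete, here is a small, generic illustration of the pseudo-translation technique (it is not Oracle's actual tooling): it swaps a few characters for accented look-alikes and pads the string, so hard-coded English strings and truncation or alignment problems stand out during testing.

import java.util.Map;

public class PseudoTranslator {

    // Illustrative character swaps only; real pseudo-translation tools cover far more.
    private static final Map<Character, Character> SWAPS =
            Map.of('a', 'à', 'e', 'é', 'o', 'ö', 'u', 'ü', 'S', 'Š');

    public static String pseudoTranslate(String source) {
        StringBuilder sb = new StringBuilder("[");
        for (char ch : source.toCharArray()) {
            sb.append(SWAPS.getOrDefault(ch, ch));
        }
        // Pad by roughly 30% to simulate languages with longer words than English.
        int pad = Math.max(1, source.length() * 3 / 10);
        sb.append("~".repeat(pad)).append("]");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(pseudoTranslate("Save Changes")); // prints [Šàvé Chàngés~~~]
    }
}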

Hebrew version of Oracle Sales Cloud Release 8

Internationalized from the get-go: Oracle Sales Cloud in Hebrew (Release 8) shows the built-in bi-directional power of Oracle ADF.

For every target language, the Oracle Worldwide Product Translation Group (WPTG) contracts with professional translators in each country to perform the translation work. Importantly, these in-country translators do not perform literal translations of content but use the choice terms, style, and tone that local Oracle WPTG language specialists specify and that our applications users demand in each country or locale.

Mockup of French R10 Oracle Sales Cloud

Mockup of an Oracle Sales Cloud landing page in French. (Image credit: Laurent Adgie, Oracle Senior Sales Consultant)

NLS versions of Oracle Applications Cloud are made available to customers at the same time as the U.S. English version, released as NLS language packs that contain the translated user interface (UI) text, messages, and embedded help for each language. The secret sauce of this ability to make language versions available at the same time is a combination of Oracle technology and smart people too: translation, in fact, begins as soon as the text is created, and not when it's released! 

And, of course, before the NLS versions of Oracle Applications Cloud are released, Oracle language quality and functional testing teams rigorously test them.

The Language of Choice

Imagine an application that will be used in North America, South America, Europe, and Asia. What words should you choose for the UI?

  • The label Last Name or Surname?
  • The label Social Security Number, Social Insurance Number, or National Identification Number?
  • The MM-DD-YYYY, DD-MM-YYYY, or YYYY-MM-DD date format?

The right word choice for a label in one country, region, or protectorate is not necessarily the right word choice in another. Insight and care are needed in that decision. Language is a critical part of UX and, in the Oracle Applications Cloud UX, all the text you see is written by information development professionals, leaving software developers free to concentrate on building the applications productively and consistently using UX design patterns based on Oracle ADF components.

Our focus on language design—choosing accurate words and specialized terms and pairing them with a naturally conversational voice and tone—and on providing descriptions and context for translators and customizers alike also enables easy translation. Translated versions of application user interface pages are ultimately only as accurate, clear, and understandable as their source pages.

In a future blog post we'll explore how PaaS4SaaS partners and developers using the Oracle Applications Cloud Simplified UX Rapid Development Kit can choose words for their simplified UIs that will resonate with the user’s world and optimize the overall experience.

For More Information

For insights into language design and translation considerations for Oracle Applications Cloud and user interfaces in general, see the Oracle Not Lost in Translation blog and Blogos.

Solaris VM Templates for WebLogic Server 12.1.3

Steve Button - Sun, 2015-09-06 19:45
A new set of VM Templates for Solaris has been made available on OTN.
These templates provide a quick and easy way to spin up pre-built WebLogic Server 12.1.3 instances using either Solaris Zones or Oracle VM Server for SPARC.

http://www.oracle.com/technetwork/server-storage/solaris11/downloads/solaris-vm-2621499.html


Last Successful login time in 12c

Pakistan's First Oracle Blog - Sun, 2015-09-06 03:24
One cool, small, yet valuable feature in Oracle 12c is the display of the 'Last Successful login time'. If authentication is done at the OS level, then it isn't shown. A small demo follows:




[oracle@targettest ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 6 18:22:00 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@targettest ~]$ sqlplus 'hr/hr'

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 6 18:22:07 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sun Sep 06 2015 18:21:56 +10:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@targettest ~]$
[oracle@targettest ~]$ sqlplus 'hr/hr' as sysbackup

SQL*Plus: Release 12.1.0.1.0 Production on Sun Sep 6 18:22:12 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@targettest ~]$
Categories: DBA Blogs

\d in Vertica

Pakistan's First Oracle Blog - Sat, 2015-09-05 00:13
The '\d' commands are a quick, neat way to list important and oft-needed information such as the names of databases, schemas, users, tables, and projections. We can also use patterns with '\d' to narrow down the results. Let's see it in action:



Connect with Vertica vsql:

vsql  -U dbadmin -w vtest -h 0.0.0.0 -p 5433 -d vtest

 Welcome to vsql, the Vertica Analytic Database interactive terminal.
Type:  \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit
vtest=>
vtest=> \dn

List of schemas
Name     |  Owner  | Comment
--------------+---------+---------
v_internal   | dbadmin |
v_catalog    | dbadmin |
v_monitor    | dbadmin |
public       | dbadmin |
TxtIndex     | dbadmin |
store        | dbadmin |
online_sales | dbadmin |
mytest       | mytest  |
(8 rows)
vtest=> \dn mytest

List of schemas
Name  | Owner  | Comment
--------+--------+---------
mytest | mytest |
(1 row)
vtest=> \dn my*

List of schemas
Name  | Owner  | Comment
--------+--------+---------
mytest | mytest |
(1 row)
vtest=> \dn v

List of schemas
Name | Owner | Comment
------+-------+---------
(0 rows)
vtest=> \dn *v*

List of schemas
Name    |  Owner  | Comment
------------+---------+---------
v_internal | dbadmin |
v_catalog  | dbadmin |
v_monitor  | dbadmin |
(3 rows)

Likewise, you can list other information:
vtest=> \dj

List of projections
Schema    |            Name             |  Owner  |       Node       | Comment
--------------+-----------------------------+---------+------------------+---------
mytest       | ptest                       | mytest  | v_vtest_node0002 |
mytest       | testtab_super               | mytest  |                  |

To list down views:
vtest=> \dv
No relations found.

If you connect with the mytest user and run:
vtest=> \dt

List of tables
Schema |  Name   | Kind  | Owner  | Comment
--------+---------+-------+--------+---------
mytest | testtab | table | mytest |
(1 row)

Following are more '\d' options from help:
Informational

\d [PATTERN]   describe tables (list tables if no argument is supplied)
PATTERN may include system schema name, e.g. v_catalog.*
\df [PATTERN]  list functions
\dj [PATTERN]  list projections
\dn [PATTERN]  list schemas
\dp [PATTERN]  list table access privileges
\ds [PATTERN]  list sequences
\dS [PATTERN]  list system tables. PATTERN may include system schema name
such as v_catalog, v_monitor, or v_internal.
Example: v_catalog.a*
\dt [PATTERN]  list tables
\dtv [PATTERN] list tables and views
\dT [PATTERN]  list data types
\du [PATTERN]  list users
\dv [PATTERN]  list views
Categories: DBA Blogs

The Importance of Data Virtualization

Kubilay Çilkara - Fri, 2015-09-04 12:37
It has been a long time since the way we store and access data was last rethought. We went from scrolls to books to mainframes, and this last method hasn’t budged all that much over the last decade or so. This is despite the fact that we keep creating more and more information, which means that better ways of storing and finding it would make a world of difference. Fortunately, this is where data virtualization comes into play. If your company isn’t currently using this type of software, you’re missing out on a better way of leveraging your organization’s data.

Problems with Traditional Methods

No matter what line of work you’re in, your company is creating all kinds of information each and every business day. We’re not just talking about copy for your website or advertising materials either. Information is created with each and every transaction and just about any time you interact with a customer.

You’re also not just creating information in these moments. You need to access stored data too. If you don’t do this well—and many, many companies don’t—you’re going to end up with a lot of unnecessary problems. Of course, you’re also wasting a lot of money on all that info you’re creating but not using.

The main problem with traditional methods of data storage and retrieval is that they rely on movement and replication processes and intermediary servers and connectors for integrating via a point-to-point system. This worked for a long time. For a few companies, it may seem like it’s still working to some degree.

However, there’s a high cost to this kind of data movement process. If you’re still trying to shoulder it, chances are your overhead looks a lot worse than competitors that have moved on.
That’s not the only problem worth addressing, though. There’s also the huge growth of data that’s continuing to head north. We’ve touched on this, but the problem is so prevalent that there’s a term for it throughout every industry: Big Data. Obviously, it refers to the fact that there is just so much information out there, but the fact this term exists also shows how much it affects all kinds of companies.

Big Data is also a statement about its creation. Its sheer proliferation is massive. Every year, the amount increases at an exponential rate. There’s just no way the old methods for storing and finding it will work.

Finally, customers expect that you’ll be able to find whatever information you need to meet their demands. Whether it’s actual information they want from you or it’s just information you need to carry out their transaction, most won’t understand why this isn’t possible. After all, they use Google and other search engines every day. If those platforms can find just about anything for free, why can’t your company do the same?

The Solution Is Data Virtualization

If all of the above sounded a bit grim, don’t worry. There’s a very simple solution waiting for you in data virtualization. This type of software overcomes the challenges that come with your data being stored all over your enterprise. You never again have to run a million different searches to collect it all or come up with some halfway decent workaround. Finally, there’s a type of platform made specifically for your company’s data gathering needs.
This software isn’t just effective. It’s convenient too. You work through one, single interface and can have access to the entirety of your company’s data store. What it does is effectively separate the external interface from the implementation going on inside.

How It Works

Every data virtualization platform is going to have its own way of working, so it’s definitely worth researching this before putting your money behind any piece of software. However, the basic idea remains the same across most titles.
With virtualization software, the data doesn’t have to be physically moved because this technology uses metadata to create a virtual perspective of the sources you need to get to. Doing so provides a faster and much more agile way of getting to and combining data from all the different sources available:
  • Distributed
  • Mainframe
  • Cloud
  • Big Data
As you can probably tell, this makes it much easier to get your hands on important information than the way you’re currently doing it.

Finding the Right Title for Your Company

Although they all serve the same purpose and the vast majority will follow the outline we just went through, there are still enough virtualization software titles out there that it’s worth thinking about what your best option will look like. As with any type of software, you don’t want to invest your money anywhere it’s not going to do the most good.
The good news is that this software has been around long enough that there have been best practices established for how it should work.
First, you want to look for a title that will actually transform the way your mainframe works by leveraging what it can do for every search. This is just a good use of your resources and gives you more bang for your buck as far as your mainframe is concerned. Software that uses it for virtualization purposes is going to work even better than a distribution-based application and won’t cost any more.

However, it’s also going to work a lot better for that price too. A lot of companies also love that this way of carrying out a search is more secure as well. The last thing you want is to pay for a piece of software that’s actually going to leave you worse off.

Secondly, although this may sound complex, you can and should find a solution that keeps things simple. Data integration can be straightforward with the method we just described without any need for redundant processes that would slow down your ability to scale up.

With data virtualization, there is no downside. Furthermore, it’s becoming more and more of a necessity. Companies are going to have to invest in this software as Big Data continues to get bigger.



Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, about topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data and Enterprise Mobility.


Categories: DBA Blogs

Don’t Settle When It Comes to Enterprise Search Platforms

Kubilay Çilkara - Fri, 2015-09-04 12:32
No company should try to operate for very long without an enterprise search platform. With so many options out there, though, you could be confused about which one is worth a spot in your organization’s budget. Let’s look at two very common workarounds some have tried, and then we will talk about why you must go with a reputable developer when you make your final decision.

Leveraging Web Search Engines

One way a lot of companies have handled their need for an enterprise search platform is to simply use one like Google or some other web engine. On paper, this may seem to make a lot of sense. After all, who doesn’t trust Google? Most of us use it every single day and would be lost without the search engine giant. If you can leverage its power for your company’s needs, that would seem like a no-brainer, right?

Unfortunately, if you try this, you’ll learn the hard way that Google leaves a lot to be desired when it comes to this type of an environment. That’s not to say it won’t work; it just won’t work well and definitely not nearly as well as you want for a search engine working your company’s internal systems. Google and other web search engines are fantastic at what they’re designed to do. They are not designed to work in an enterprise search environment though. For that, you need a true enterprise search platform.

The major problem is that web search engines are all about sheer volume, which makes a lot of sense once you think about it. When you want to find a pizza parlor in your city, you want to know about every single option and then some. Google’s ability to harvest this much information and present it quickly is one of the reasons it’s so popular. Any search engine that can’t deliver this kind of volume is going to alienate users and soon be left on the scrap heap.

What web search engines like Google don’t do well, though, is carry out deep, detail-oriented searches. Again, this makes sense. Google is driven largely by keywords and backlinks. These are “surface searches” that weren’t created to do some of the tasks you need an enterprise search platform for.

Your employees may do some very simple searches, but they probably also have some pretty demanding queries too. Enterprise search platforms are designed to handle these types of searches and drill down to the last detail in any file or piece of data they come across. If your employees aren’t able to do these types of searches on a regular basis, the search software you invested in will be a waste of money. Worse, it could land you with all kinds of other problems because your people will think they’re doing the best possible search they can and concluding that what they want can’t be found.

Also, don’t forget that your company stores information in different places. Yes, it may all be on the same server, but in the digital world, there are various “silos” that hold onto information. Each silo is its own environment with its own rules. When you try using a web search engine to look through all your company’s silos, what will most likely happen is that it will have to go through one at a time. This is far from ideal, to say the least.

If you have a good number of silos, your employees will most likely give up. They won’t want to walk the search engine from one silo to the next like they’re holding onto the leash of a bloodhound. The whole point of a search engine is that it’s supposed to cut down on the exhaustive amount of “manual” work you’d otherwise have to do to find the data you need.

Silos aren’t all the same, so you want a search program that can go in and out of the type you have without requiring the employee to reconfigure their query.


Open Source Software


Another very popular method of acquiring enterprise search software is to go with an open source title. Once again, on paper, this seems like a very logical route to take. For one thing, the software is free. You can’t beat that price, especially when it comes to an enterprise-level platform. This usually seems like an unbeatable value for small- and medium-sized businesses that don’t have a lot of leeway where their budget is concerned.

That’s just one benefit that proponents of open source search engines tout though. There’s also the fact that you can modify the software as you see fit. This gives the user the ability to basically mold the code into a result that will fit their company’s needs like a glove.

There are a couple problems though. The first is that you get what you pay for. If your open source software doesn’t deliver, you have no one to complain to. You can always do your research to find a title that others have given positive reviews to. At the end of the day, though, don’t expect much in the way of support. There are plenty of forums to go through to find free advice, but that’s not the type of thing most professionals want to wade through.

When you go with a true, professional version, your employees will never be far from help if something goes wrong. Most companies these days have email and phone lines you can use, but many also have chat boxes you can open up on their website. None of these will be available for your company if you go with open source software.

Also, you can definitely modify the search engine software, but this isn’t necessarily unique to open-source platforms. Professional search platforms can be modified a number of ways, allowing the user to streamline their software and fine-tune the result so they get relevant results again and again.
This type of architecture, known as a pipeline, is becoming more and more the standard in this industry. Enterprise platforms come with all kinds of different search features, but that can also be a problem if they start getting in the way of one another. To ensure there are never too many cooks in the kitchen, pipeline architecture is used to line them all up, one in front of the other. By doing so, you’ll have a much easier time getting the search results you want, especially because you can just reconfigure these features as you see fit whenever you like.


Ongoing Updates Are Yours


One very important aspect of professional enterprise search platforms that is worth pointing out is that most developers are constantly putting out updates for their product. This is the same thing web search engines do, of course. Google, Yahoo and Bing all release upgrades constantly. The difference, however, is that enterprise platforms get upgrades that are specific to their purposes.
While there are updates for open source software, expect sporadic results. The developer of your favourite title could give up and go on to another project, leaving you to look for someone else to continue creating great updates.

If you have a skilled developer who is familiar with open source search engines on your team, this may be an attractive option. Still, most will find this route is just too risky. Most of us also don’t have that kind of developer on staff and it wouldn’t be worth it to hire someone on specifically for this reason (it’d be much more affordable to just buy professional software). Also, remember that, even if you do have this kind of talent within your ranks, you’ll soon become completely beholden to them if you start trusting them with this kind of job. Having someone who is completely responsible for your search engine being able to work and not having someone else on staff who can offer support or replace them is not a good idea.


Scalability Is a Given

Every company understands how important scalability is. This is especially true when it comes to software though. The scalability of a program can really make or break its value. Either it will turn into a costly mistake that greatly holds your business back or it will become the type of agile asset you actually take for granted, it’s so helpful.

Open source platforms are only as scalable as their code allows, so if the person who first made it didn’t have your company’s needs in mind, you’ll be in trouble. Even if they did, you could run into a problem where you find out that scaling up actually reveals some issues you hadn’t encountered before. This is the exact kind of event you want to avoid at all costs.

Now that you realize the importance of going with a reputable developer, your next step is picking which one to choose. You definitely won’t lack for options these days, so just take your time to ensure you go with the best one for your business.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software (www.rocketsoftware.com), about topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data and Enterprise Mobility.
Categories: DBA Blogs

Measuring Tuxedo Queuing in the PeopleSoft Application Server

David Kurtz - Fri, 2015-09-04 09:13

Why Should I Care About Queuing?
Queuing in the application server is usually an indicator of a performance problem, rather than a problem in its own right.  Requests will back up on the inbound queue because the application server cannot process them as fast as they arrive.  This is usually seen on the APPQ which is serviced by the PSAPPSRV process, but applies to other server processes too.  Common causes include (but are not limited to):
  • Poor performance of either SQL on the database or PeopleCode executed within the application server is extending service duration
  • The application server domain is undersized for the load.  Increasing the number of application server domains or application server processes may be appropriate.  However, before increasing the number of server processes it is necessary to ensure that the physical server has sufficient memory and CPU to support the domain (if the application server CPU is overloaded then requests move from the Tuxedo queues to the operating system run queue).
  • The application server has too many server processes per queue, causing contention in the system calls that enqueue and dequeue requests to and from the IPC queue structure.  A queue with more than 8-10 application server processes can exhibit this contention.  There will be a queue of inbound requests, but not all of the server processes will be busy.
When user service requests spend time queuing in the application server, that time is part of the users' response time.  Application server queuing is generally to be avoided (although it may be the least worst alternative). 
What you do about queuing depends on the circumstances, but it is something that you do want to know about.
3 Ways to Measure Application Server Queuing
There are a number of ways to detect queuing in Tuxedo:
  • Direct measurement of the Tuxedo domain using the tmadmin command-line interface.  A long time ago I wrote a shell script, tuxmon.sh.  It periodically runs the printqueue and printserver commands on an application server and extracts comma-separated data to a flat file that can then be loaded into a database.  It would have to be configured for each domain in a system.
  • Direct Measurement with PeopleSoft Performance Monitor (PPM).  Events 301 and 302 simulate the printqueue and printserver commands.  However, event 301 only works from PT8.54 (and at the time of writing I am working on a PT8.53 system).  Even then, the measurements would only be taken once per event cycle, which defaults to every 5 minutes.  I wouldn't recommend increasing the sample frequency, so this will only ever be quite a coarse measurement.
  • Indirect measurement from sampled PPM transactions, although this includes time spent on the return queue and in unpacking the Tuxedo message.  This technique is what the rest of this article is about.
Indirectly Measuring Application Server Queuing from Transactional Data
Every PIA and Portal request includes a Jolt call made by the PeopleSoft servlet to the domain.  The Jolt call is instrumented in PPM as transaction 115.  Various layers in the application server are instrumented in PPM, and the highest point is transaction 400, which is where the service enters the PeopleSoft application server code.  Transaction 400 is always the immediate child of transaction 115.  The difference in the duration of these transactions is the duration of the following operations:
  • Transmit the message across the network from the web server to the JSH.  There is a persistent TCP socket connection.
  • To enqueue the message on the APPQ queue (including writing the message to disk if it cannot fit on the queue).
  •  Time spent in the queue
  • To dequeue the message from the queue (including reading the message back from disk if it was written there).
  • To unpack the Tuxedo message and pass the information to the service function
  • And then repeat the process for the return message back to the web server via the JSH queue (which is not shown in tmadmin).
I am going to make an assumption that the majority of the time is spent by messages waiting in the inbound queue and that time spent on the other activities is negligible.  This is not strictly true, but it is good enough for practical purposes.  Any error means that I will tend to overestimate queuing.
Some simple arithmetic can convert this duration into an average queue length. A queue length of n means that n requests are waiting in the queue.  Each second there are n seconds of queue time.  So the number of seconds per second of queue time is the same as the queue length. 
I can take all the sampled transactions in a given time period and aggregate the time spent between transactions 115 and 400.  I must multiply it by the sampling ratio, and then divide it by the duration of the time period for which I am aggregating it.  That gives me the average queue length for that period.
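As a worked illustration of the arithmetic (the numbers are invented), suppose the sampling rate is 1 in 4: if the sampled transactions in a given one-minute period accumulate 30 seconds of time between transactions 115 and 400, the estimated total queue time is 4 x 30 = 120 seconds, and dividing by the 60-second period gives an average queue length of 2.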
This query aggregates queue time across all application server domains in each system.  It would be easy to examine a specific application server, web server or time period.
WITH c AS (
SELECT B.DBNAME, b.pm_sampling_rate
, TRUNC(c115.pm_agent_Strt_dttm,'mi') pm_agent_dttm
, A115.PM_DOMAIN_NAME web_domain_name
, SUBSTR(A400.PM_HOST_PORT,1,INSTR(A400.PM_HOST_PORT,':')-1) PM_tux_HOST
, SUBSTR(A400.PM_HOST_PORT,INSTR(A400.PM_HOST_PORT,':')+1) PM_tux_PORT
, A400.PM_DOMAIN_NAME tux_domain_name
, (C115.pm_trans_duration-C400.pm_trans_duration)/1000 qtime
FROM PSPMAGENT A115 /*Web server details*/
, PSPMAGENT A400 /*Application server details*/
, PSPMSYSDEFN B
, PSPMTRANSHIST C115 /*Jolt transaction*/
, PSPMTRANSHIST C400 /*Tuxedo transaction*/
WHERE A115.PM_SYSTEMID = B.PM_SYSTEMID
AND A115.PM_AGENT_INACTIVE = 'N'
AND C115.PM_AGENTID = A115.PM_AGENTID
AND C115.PM_TRANS_DEFN_SET=1
AND C115.PM_TRANS_DEFN_ID=115
AND C115.pm_trans_status = '1' /*valid transaction only*/
--
AND A400.PM_SYSTEMID = B.PM_SYSTEMID
AND A400.PM_AGENT_INACTIVE = 'N'
AND C400.PM_AGENTID = A400.PM_AGENTID
AND C400.PM_TRANS_DEFN_SET=1
AND C400.PM_TRANS_DEFN_ID=400
AND C400.pm_trans_status = '1' /*valid transaction only*/
--
AND C115.PM_INSTANCE_ID = C400.PM_PARENT_INST_ID /*parent-child relationship*/
AND C115.pm_trans_duration >= C400.pm_trans_duration
), x as (
SELECT dbname, pm_agent_dttm
, AVG(qtime) avg_qtime
, MAX(qtime) max_qtime
, c.pm_sampling_rate*sum(qtime)/60 avg_qlen
, c.pm_sampling_rate*count(*) num_services
FROM c
GROUP BY dbname, pm_agent_dttm, pm_sampling_rate
)
SELECT * FROM x
ORDER BY dbname, pm_agent_dttm
  • Transactions are aggregated per minute, so the queue time is divided by 60 at the end of the calculation because we are measuring time in seconds.
Then the results from the query can be charted in Excel (see http://www.go-faster.co.uk/scripts.htm#awr_wait.xls). This chart was taken from a real system undergoing a performance load test, and we could see the queuing build up and clear as the load varied during the test.


Is this calculation and assumption reasonable?
The best way to validate this approach would be to measure queuing directly using tmadmin.  I could also try this on a PT8.54 system where event 301 will report the queuing.  This will have to wait for a future opportunity.
However, I can compare queuing with the number of busy application servers as reported by PPM event 302 for the CRM database.  Around 16:28 queuing all but disappears.  We can see that there were a few idle application servers, which is consistent with the queue being cleared.  Later the queuing comes back, and most of the application servers are busy again.  So it looks reasonable.
Application Server Activity

Schema On Demand and Extensibility

Dylan's BI Notes - Fri, 2015-09-04 01:44
Today I saw a quite impressive demo at the Global Big Data Conference. AtScale provides a BI metadata tool for data stored in Hadoop. At first, I thought that this was just another BI tool that accesses Hadoop via Hive, like what we have in OBIEE.  I heard that the SQL performance for BI […]
Categories: BI & Warehousing

Oracle Priority Support Infogram for 03-SEP-2015

Oracle Infogram - Thu, 2015-09-03 17:58

RDBMS



SQL


SQL Developer


Scripting Oracle

node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database), from Scripting and Oracle: Christopher Jones.

MySQL

MySQL Enterprise Monitor 2.3.21 has been released, from MySQL Enterprise Tools Blog.

Solaris


SOA

Top tweets SOA Partner Community – August 2015, from SOA & BPM Partner Community Blog.

Java



Hyperion

Patch Set Update: Hyperion Strategic Finance 11.1.2.3.507, from Business Analytics – Proactive Support.

From the same source:


EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:


…and Finally




node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Thu, 2015-09-03 17:24

Version 1.1 of node-oracledb, the add-on for Node.js that powers high-performance Oracle Database applications, is available on NPM.

This is a stabilization release, with one improvement to the behavior of the local connection pool. The add-on now checks whether pool.release() should automatically drop sessions from the connection pool. This is triggered by conditions where the connection is deemed to have become unusable. A subsequent pool.getConnection() will, of course, create a new, replacement session if the pool needs to grow.

Immediately as we were about to release, we identified an issue with lobPrefetchSize. Instead of delaying the release, we have temporarily made setting this attribute a no-op.

The changes in this release are:

  • Enhanced pool.release() to drop the session if it is known to be unusable, allowing a new session to be created.

  • Optimized query memory allocation to account for different database-to-client character set expansions.

  • Fixed build warnings on Windows with VS 2015.

  • Fixed truncation issue while fetching numbers as strings.

  • Fixed AIX-specific failures with queries and RETURNING INTO clauses.

  • Fixed a crash with NULL or uninitialized REF CURSOR OUT bind variables.

  • Fixed potential memory leak when connecting throws an error.

  • Added a check to throw an error sooner when a CURSOR type is used for IN or IN OUT binds. (Support is pending).

  • Temporarily disabled setting lobPrefetchSize

Issues and questions about node-oracledb can be posted on GitHub or OTN. We need your input to help us prioritize work on the add-on. Drop us a line!

Installation instructions are here.
