
Feed aggregator

Closer look at the SOA 12c Feature: Oracle Managed File Transfer

The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application...

We share our skills to maximize your revenue!
Categories: DBA Blogs

UKOUG 2014 Elections

Doug Burns - Sat, 2014-08-16 18:18
I noticed from Debra Lilley's blog post that there are some UKOUG elections at the moment, with voting closing on 1st September 2014.

Although not an active member or supporter of UKOUG any more (at least partly because I'm based in Singapore!), I've had a pretty long association with the user group and a lot of my friends have been involved over the years, so I still take an interest in what's going on there. Even more so this time, because I know two of the candidates pretty well.

Carl Dudley needs no introduction to anyone who has been remotely close to the UK or European OUG scene over the years. He's an old mate who has put in a world of time to UKOUG and, as a techie, has always tried to ensure that it remains relevant to all areas of the membership.

Pauline Drummond, on the other hand, will be largely unknown to most of the OUG community as I think she's only been attending events over the past few years. (I may be wrong about that as my memory isn't what it was for some reason ;-)) I know Pauline pretty well, though, as she was a manager at Standard Life when I worked there on contracts for several years before moving down to London, including being my direct manager for the last contract there. President Elect seems a pretty senior role within UKOUG but if Pauline applies the same boundless energy and enthusiasm that she always did in the office then I can see her being great at it. She makes me tired just thinking about all of the volunteering and organisation and sport and work stuff she gets through, and she is very dedicated and focused on working with others to get things done, which strikes me as just what you need from a president of a user group.

For a change, it's not one of my techie mates I'm suggesting would be good for the role of President, because it is a role that needs to respect and appeal to the entire membership and the other entities that UKOUG has to deal with, not least Oracle, so you need someone with a broad corporate view. Pauline is an appropriate choice in this case, although I can't help hoping that she doesn't antagonise potential conference presenters as UKOUG seems to have done in recent times!

Regardless, I always hope for the best for UKOUG and my various mates who put a power of work into their volunteering and presenting roles, so hopefully some new voices will be a step in the right direction ....

Blame It On The Drugs

Floyd Teter - Sat, 2014-08-16 17:21
Bronchitis.  I catch it a lot.  Rotten experience.  It's like an invisible elephant is sitting on your chest.  And the drugs are mind-numbing.  Got it now.  Shivering under a blanket in 90 degree weather.  But, it'll pass.  And, in the meantime, if I write something weird...well, let's blame it on the drugs, OK?

Had a chat with a dear old friend this week.  Middle-manager for a Fortune 500 corporation.  Big Oracle customer.  Lots of excitement brewing in his neck of the woods over all the money they'll save moving to "the cloud".  I thought it would be interesting to explore this further, so we did some very rough calculations on the back of a napkin.  Over the long run, those savings went out the window.  Have to admit, I knew how the conversation would turn out.  And I didn't mean to rain on his parade. Blame it on the drugs.

Big companies don't move to the cloud for long-term savings.  They move to increase agility in the face of rapidly-changing markets.  They move in order to refocus internal resources on profit centers rather than cost centers.  They move in order to complement existing systems without causing huge operational upset.  Smaller companies also move to cloud because the financial barriers to entry are lower - less of an upfront cost to get the same tools the big enterprises are using.  But long-term dollar-for-dollar savings...yeah, those numbers don't seem to play out.

We wrapped up the conversation on cost savings with the tried-and-true "well, they've already made the decision that it will save us money, so we're moving ahead."  I let that slide and we moved on to his excitement at learning something new (this will be his first cloud project).  Then I asked the question:  "What kind of cloud?  Private, hosted, SaaS, hybrid...what are y'all doing?"

Crickets.  Nothing.  Silence.  Now, I didn't mean to throw the guy another curveball.  I mean, he's my friend.  Compassion has to play in here somewhere, right?  But it happened.  Silence...maybe with a little edge of frustration.  Sorry.  Blame it on the drugs.

I get a little nervous when customers announce a commitment to "going to the cloud" without really understanding the benefits they can expect or how they plan to achieve those benefits.  It's putting the cart before the horse and wondering why things don't move forward.  Just makes no sense to me.

Don't get me wrong.  I think many enterprises have much to gain from considering a cloud approach for their enterprise IT.  I just think they should understand the basic concepts and know why they're taking the leap before they jump.  Different enterprises will come to different conclusions.

But I see it more and more as time goes by...people buying into the hype without really knowing why.  Then again, maybe it's my perspective that's off?  If so, blame it on the drugs.

ASM Commands : 1 -- Adding and Using a new DiskGroup for RAC

Hemant K Chitale - Sat, 2014-08-16 10:22
In 11gR2 Grid Infrastructure and RAC

On node1, I discover and add a disk to ASM.  NFS "devices" asmdisk.1 to asmdisk.6 are already present as ASM Disks. asmdisk.7 has been added on NFS mount point /data1. (Disks asmdisk.3 to asmdisk.6 are on /data2.)
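As an aside, here's a hedged sketch of how an NFS-backed ASM "disk" file like this is typically created (run as the grid owner; the 2048000000-byte size matches the ls listings below, and a default umask leaves the file -rw-r--r--, which becomes significant later in this post):

dd if=/dev/zero of=/data1/asmdisk.7 bs=1000000 count=2048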

I start on node1 in my Cluster

[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:42:02 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /crs/*, /data1/*, /data2/*, /f
                                                 ra/*
SQL> !ls -l /data1/asm*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 16 23:33 /data1/asmdisk.7

SQL> create diskgroup DATA3 disk '/data1/asmdisk.7';
create diskgroup DATA3 disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only
1


SQL> create diskgroup DATA3 external redundancy disk '/data1/asmdisk.7';

Diskgroup created.

SQL>
SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME                           TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL>

I now have a new DiskGroup using External Redundancy with a single disk.  Is it visible at node2 ?

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:47:45 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME                           TOTAL_MB
------------ ------------------------------ ----------
           0 DATA3                                   0

SQL>

Why is the size not visible yet?  Because, although the CREATE on node1 had also MOUNTed the Disk Group there, it hasn't been mounted on node2 yet.

SQL> alter diskgroup DATA3 mount;

Diskgroup altered.

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME                           TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL>

Can I confirm the underlying disk ?

SQL> select group_number, disk_number, header_status, state, total_mb
2 from v$asm_disk
3 where group_number = 5;

GROUP_NUMBER DISK_NUMBER HEADER_STATU STATE    TOTAL_MB
------------ ----------- ------------ -------- ----------
           5           0 MEMBER       NORMAL         1953

SQL>


What happens when I create a tablespace/datafile in this DiskGroup, from the instance on node1 ?

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:08:31 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create tablespace NEW_TBS datafile '+DATA3';
create tablespace NEW_TBS datafile '+DATA3'
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA3'
ORA-15045: ASM file name '+DATA3' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA3
ORA-15081: failed to submit an I/O operation to a disk


SQL>

Why do I get this error ?  I could create a DiskGroup on the ASM Disk, so why couldn't I add a datafile ?  Let me check the permissions.

SQL> !sh
sh-3.2$ cd /data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.7
sh-3.2$ su grid
Password:
sh-3.2$ pwd
/data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ chmod 775 asmdisk.7
sh-3.2$ ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$

The oinstall group that is used by "oracle" did not have write permission on the disk file. Let me go back to the "oracle" session now, after having granted the permissions.

sh-3.2$ exit
exit
sh-3.2$ exit
exit

SQL> l
1* create tablespace NEW_TBS datafile '+DATA3'
SQL> /

Tablespace created.

SQL>

The CREATE TABLESPACE has succeeded.  I can verify the datafile and the ASM file from node2 now.

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:17:19 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, file_number, bytes/1048576, type, redundancy
2 from v$asm_file
3 where group_number=5;

GROUP_NUMBER FILE_NUMBER BYTES/1048576
------------ ----------- -------------
TYPE                                                             REDUND
---------------------------------------------------------------- ------
           5         256    100.007813
DATAFILE                                                         UNPROT


SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:19:34 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select file_name, bytes/1048576 from dba_data_files
2 where tablespace_name = 'NEW_TBS';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
100


SQL>

Now I have the new DataFile, on the new DiskGroup, visible both in ASM and in the database.
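As a quick aside -- rather than logging on to each node in turn as I've done above, the GV$ views let you confirm the mount status of the DiskGroup across all ASM instances from a single session. A hedged sketch:

select inst_id, group_number, name, state, total_mb
from gv$asm_diskgroup
where name = 'DATA3'
order by inst_id;

An instance that hasn't mounted the group would show STATE = 'DISMOUNTED' (with GROUP_NUMBER 0), matching what we saw on node2 before the MOUNT.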

Categories: DBA Blogs

Webcast - Oracle Database In-Memory Option

Following the recent announcement by Larry Ellison on the Future of the Database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

PeopleTools 8.54 Upgrade now Available

Jim Marion - Fri, 2014-08-15 23:38

Today Matthew Haavisto of the PeopleTools strategy team announced that the PeopleTools 8.54 upgrade is now available. Visit the PeopleSoft Technology Blog to learn more.

PeopleTools 8.54 Upgrade Now Available

PeopleSoft Technology Blog - Fri, 2014-08-15 16:36
We recently announced that PeopleTools 8.54 is generally available.  Now we are happy to announce that PeopleTools 8.54 Upgrade is also available for customers upgrading to 8.54 from earlier releases.  This documentation home page provides a wealth of information on upgrading to this important release.

All Access Pass to Oracle Support

Joshua Solomin - Fri, 2014-08-15 14:05

Looking for tips, recommendations and resources to help you keep your Oracle applications and systems running at peak performance? Want to find out how to get more out of your Oracle Premier Support coverage?

More than 500 experts from across Services and Support will be on hand at Oracle OpenWorld to answer your questions and share best practices for adopting and optimizing Oracle technology.

  • Find out what Oracle experts know about the best tools, tips and resources for supporting and upgrading Oracle technology. Attend one of our “Best Practices” sessions.
  • Stop by the Oracle Support Stars Bar to talk with support experts. Open daily @ Moscone West, Exhibition hall 3161.
  • See Oracle support tools in action at one of our demos.

View the schedule of all of our Oracle Premier Support activities at Oracle OpenWorld for more information.

See you there!

In-memory limitation

Jonathan Lewis - Fri, 2014-08-15 13:51

I’ve been struggling to find time to have any interaction with the Oracle community for the last couple of months – partly due to workload, partly due to family matters and (okay, I’ll admit it) I really did have a few days’ holiday this month. So making my comeback with a bang – here’s a quick comment about the 12.1.0.2 in-memory feature, and how it didn’t quite live up to my expectation; but it’s also a comment about assumptions, tests, and inventiveness.

One of the 12.1.0.2 manuals tells us that the optimizer can combine the in-memory columnar storage mechanism with the “traditional” row store mechanisms – unfortunately it turned out that this didn’t mean quite what I had hoped; I had expected too much of the first release. Here’s a quick demo of what doesn’t happen, what I wanted to happen, and how I made it happen (note – this is running 12.1.0.2 and the inmemory_size parameter has been set to enable the feature).
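For reference, a minimal hedged sketch of what “setting the parameter” amounts to – 100M here is just an illustrative value (it also happens to be the documented minimum), and the instance has to be restarted before the new SGA component is allocated:

alter system set inmemory_size = 100M scope=spfile;
-- restart the instance, then confirm with: show parameter inmemory_size

With that in place, we can start with the simple definition: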


create table t1 nologging
as
select	*
from	all_objects
where	rownum <= 50000
;

alter table t1 inmemory
no inmemory (object_id, object_name)
inmemory memcompress for query low (object_type)
-- all other columns implicitly inmemory default
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

begin
	dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

rem
rem	Needs select on v_$im_column_level granted
rem

select
	table_name,
	column_name,
	inmemory_compression
from
	v$im_column_level
where	owner = user
and	table_name = 'T1'
order by
	segment_column_id
;

explain plan for
select
	last_ddl_time, created
from
	t1
where	t1.created > trunc(sysdate)
and	t1.object_type = 'TABLE'
and	t1.subobject_name is not null
;

select * from table(dbms_xplan.display);

All I’ve done at this point is create a table with most of its columns in-memory and a couple excluded from the columnar store. This is modelling a table with a very large number of columns where most queries are targeted at a relatively small subset of the data; I don’t want to have to store EVERY column in-memory in order to get the benefit of the feature, so I’m prepared to trade lower memory usage in general against slower performance for some queries. The query against v$im_column_level shows me which columns are in-memory, and how they are stored. The call to explain plan and dbms_xplan then shows that a query involving only columns that are declared in-memory could take advantage of the feature. Here’s the resulting execution plan:

-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |     1 |    27 |    73   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS INMEMORY FULL| T1   |     1 |    27 |    73   (9)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - inmemory("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))
       filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

Note that the table access full includes the inmemory keyword, and the predicate section shows the predicates that have taken advantage of in-memory columns. The question is – what happens if I add the object_id column (which I’ve declared as no inmemory) to the select list? Here’s the resulting plan:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    32 |  1818   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    32 |  1818   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

There’s simply no sign of an in-memory strategy – it’s just a normal full tablescan (and I didn’t stop with execution plans, of course, I ran other tests with tracing, snapshots of dynamic performance views etc. to check what was actually happening at run-time).
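For anyone who wants to repeat that kind of run-time check, here’s a hedged sketch of the sort of snapshot queries I mean – v$im_segments shows whether a segment is populated in the column store, and the “IM scan” session statistics show whether a query actually used it (treat the exact statistic names as assumptions to verify on your own 12.1.0.2 system):

select segment_name, populate_status, bytes_not_populated
from v$im_segments
where segment_name = 'T1';

select sn.name, ms.value
from v$mystat ms, v$statname sn
where ms.statistic# = sn.statistic#
and sn.name in ('IM scan rows', 'IM scan CUs columns accessed');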

In principle there’s no reason why Oracle couldn’t use the in-memory columns that appear in the where clause to determine the rowids of the rows that I need to select and then visit the rows by rowid but (at present) the optimizer doesn’t generate a plan to do that. There’s no reason, though, why we couldn’t try to manipulate the SQL to produce exactly that effect:


explain plan for
select
        /*+ no_eliminate_join(t1b) no_eliminate_join(t1a) */
        t1b.object_id, t1b.last_ddl_time, t1b.created
from
        t1 t1a, t1 t1b
where   t1a.created > trunc(sysdate)
and     t1a.object_type = 'TABLE'
and     t1a.subobject_name is not null
and     t1b.rowid = t1a.rowid
;

select * from table(dbms_xplan.display);

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    64 |    74   (9)| 00:00:01 |
|   1 |  NESTED LOOPS               |      |     1 |    64 |    74   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |     1 |    31 |    73   (9)| 00:00:01 |
|   3 |   TABLE ACCESS BY USER ROWID| T1   |     1 |    33 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))
       filter("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))

I’ve joined the table to itself by rowid, hinting to stop the optimizer from getting too clever and eliminating the join. In the join I’ve ensured that one reference to the table can be met completely from the in-memory columns, isolating the no inmemory columns to the second reference to the table. It is significant that the in-memory tablescan is vastly lower in cost than the traditional tablescan – and there will be occasions when this difference (combined with the knowledge that the target is a relatively small number of rows) means that this is a very sensible strategy. Note – the hints I’ve used happen to be sufficient to demonstrate the method, but I’d be much more thorough in a production system (possibly using an SQL baseline to fix the execution plan).

Of course, this method is just another example of the “visit a table twice to improve the efficiency” strategy that I wrote about a long time ago; and it’s this particular variant of the strategy that allows you to think of the in-memory columnar option as an implementation of OLTP bitmap indexes.


Oracle Database 12c In-Memory Feature – Part V. You Can’t Use It If It’s Not “Enabled.” Not Being Able To Use A Feature Is An Important “Feature.”

Kevin Closson - Fri, 2014-08-15 13:04

This is part 5 in a series: Part I, Part II, Part III, Part IV, Part V.

Synopsis

This blog post is the last word on the matter.

Enabled?  It’s About Usage!

You don’t get charged for Oracle feature usage unless you use the feature. So why does Oracle inconsistently use the word enabled when we care about usage? If enabled precedes usage then enabled is a sanctified term. Please read on…

It’s All About Getting The Last Word? No, It’s About Taking Care Of Customers.

On August 6, 2014 Oracle shared their last word and official statement on the matter of bug-ridden tracking of the Oracle Database 12c In-Memory feature usage in a quote to the press at CBR. I’ll paraphrase first and then quote the article. Here is what I hear when I read the words of Oracle’s spokesman:

Yeah, my bad, we have a bug. The defective code erroneously tracks feature usage for an Enterprise Edition additional cost option priced at $23,000 per processor core. Don’t worry. When we track this particular feature usage we’ll ignore it should you be audited. You have our spoken word that we’ll just shine this one on. Here, let me trade a few confusing words about usage without using the word enabled or disabled since those are taboo.

My paraphrase probably draws a more serene picture than the visions of tip-toeing and side-stepping conjured up by the following words I’ll quote from the CBR article. Bear in mind the fact that the bug spoken of in the quote is 19308780–a bug, by the way, that is not readable by maintenance contract holders. Now I’ll quote the article:

Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October.

Yes, we knew it was a bug. I merely had to do the hard work of getting Oracle to acknowledge it. The article continued with the following quote. Please ignore the fact that Oracle’s spokesman refers to me simply as “Kevin.” Focus instead on the fact that throughout parts 1 through 4 in my series I suffered erroneous feature usage reporting because of a bug (software defect). I quote:

Kevin initially claimed that feature tracking could report In-Memory usage, and therefore impact licensing, without the end-user doing anything. This was and is still not the case. Customer licensing of Oracle Database In-Memory is not impacted by the bug that Maria notes in her blog. When an end-user explicitly undertakes actions to set the INMEMORY attribute on a table but the In-Memory column store has not been allocated (by setting the inmemory_size parameter to a non zero value), the bug results in feature tracking incorrectly reporting In-Memory ‘in use’. However as no column store has been allocated, the feature is not in use and therefore there is no licensing impact.

 

Ah yes. The old “it’s not in use but it reports it’s in use” situation. That could have been conveyed in very short sentences…could have.

Since the bug spoken of in the above quote is not visible to contract holders I’m just going to let you mull over the circular logic.  This whole situation could be a lot simpler if Oracle would either a) make a bug description visible to contract holders so customers know what is broken and how to test whether it got fixed when the patch is eventually applied and/or b) add this defect to MOS 1309070.1 which is a bug that tracks all the other bugs in feature usage reporting. Yes, indeed, there are other bugs of this sort with other features. All software has bugs.

Last Word On The Matter

My last word on the matter has to do with the fact that the feature cannot be unlinked. It is a very expensive–and very useful, important–feature. As I pointed out in Part II, the feature cannot be absolutely disabled at the executable level, as is the case for other high-cost options like Real Application Clusters and Partitioning. I think Oracle is trying to tell us it is impossible computer science to make it an unlinkable feature–at least that’s how I interpret the following words in a blog post at Oracle.com:

Oracle Database In-Memory is not a bolt on technology to the Oracle Database. It has been seamlessly integrated into the core of the database as a new component of the Shared Global Area (SGA). When the Oracle Database is installed, Oracle Database In-Memory is installed. They are one and the same. You can’t unlink it or choose not to install it.

Now maybe this is not saying there is no way to code the feature as unlinkable. Maybe it’s saying the choice was made to not make it unlinkable. I don’t know. If, however, we are to believe that the mere fact the feature uses the SGA makes it some sort of atomic-level symbiotic parasite, well, that argument doesn’t hold water. Indeed, Real Application Clusters is massively integrated with the SGA. Ever heard of Cache Fusion? With Cache Fusion, data blocks get shuttled from one SGA to another across hosts in a cluster! Real Application Clusters is unlinkable–that’s unthinkable!

 

What Is Unlinkable Anyway

There might be folks who don’t know what we mean when we say a feature is unlinkable. This doesn’t mean all the code for the feature is yanked out of the binary. It simply means that a single–or perhaps a few–binary objects that enable the feature are linked into the Oracle executable. If unlinked, there is absolutely no way to use the feature–as is the case with, for instance, Real Application Clusters.
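For anyone who hasn’t seen it, here’s roughly what unlinking looks like for the one high-cost option that supports it – the long-documented make targets that relink the Oracle executable with Real Application Clusters off or on (a sketch; check your platform’s install guide before running anything like this):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off ioracle   # relink the oracle binary with RAC disabled
make -f ins_rdbms.mk rac_on ioracle    # relink with RAC enabled

After a rac_off relink, the option is simply gone from the running binary.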

And not being able to use the feature is an important feature!

So let’s ponder the insurmountable computer science that must surely be involved in implementing the In-Memory Column Store feature as unlinkable.

Oracle has told us the INMEMORY_SIZE initialization parameter is the on/off button for the feature. That means there is a single, central on/off button that is, indeed, able to be manipulated even by the user. Can you imagine how difficult it must be to implement a global variable–even a simple boolean–that gets linked in and checked when one boots the database? Not hard to grasp. What if the variable had a silly name like inmemory_deactivated? What if the feature activation module–let’s call it inmem.o–had inmemory_deactivated=TRUE but an alternate module called inmemON.o had inmemory_deactivated=FALSE? In much the same way we relink Real Application Clusters, the link scripts manipulate the file name so that the default (with the feature deactivated) gets replaced with the activated module–only if the user wants the possibility of using the feature. How would all this deep, dark, complex code come together? Well, when the database instance is booted, inmemory_deactivated is evaluated and, regardless of the user’s setting of INMEMORY_SIZE, the In-Memory feature is really, truly disabled–and most importantly not usable. No possibility for confusion. Would that be better than a game of Licensed-Feature Usage Prevention Twister(tm)?
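Sketched out in the style of the RAC link scripts – every name here (inmem.o, inmemON.o, inmemory_deactivated) is the hypothetical naming from the paragraph above, not anything that ships with Oracle; only ins_rdbms.mk and the ioracle target are borrowed from how relinking actually works:

cd $ORACLE_HOME/rdbms/lib
# hypothetical: the shipped inmem.o carries inmemory_deactivated=TRUE;
# swap in the "activated" module only for customers licensing the option
cp inmemON.o inmem.o
make -f ins_rdbms.mk ioracle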


Intensely Deep Engineering Difficulty

Now, imagine that. We didn’t even have to use the back of a cocktail napkin to draw out a solution to the mysteries behind how utterly unlinkable the In-Memory Database feature must surely be. We simply a) drew upon our understanding of other SGA-integrated features like Real Application Clusters, b) recalled how unlinking works for other features and c) drew upon our basic understanding of the C programming language vis-a-vis global variables and object linking.

Let me summarize all that: There is a single user-modifiable boot-time parameter that disables In-Memory Database as per Oracle’s blog and spokesman assertions. Um, that’s a pretty simple focal point to make the feature unlinkable.

Summary

Yes, Oracle could implement a method for making the In-Memory Column Store feature an unlinkable option just like they did for Real Application Clusters. I can only imagine why they chose not to (visions of USD $23,000 per processor core).


Filed under: oracle

Smoothing the Transition – The New Smart View 11.1.2.1.102 for Microsoft Office and OBIEE

Rittman Mead Consulting - Fri, 2014-08-15 12:24

Introductions

There’s a good chance that, if you’re reading this, you perform some reporting, analytics, or data stewardship role, or probably some combination of all three. And be it for a large corporation or a small company, there are likely standards and practices that govern how those jobs are performed on a day-to-day basis; not easily changed, and perpetually validated by big budgets and long careers. It is equally likely that deeply ingrained within these reporting practices lies some moderate-to-heavy implementation of Excel. It wasn’t long ago that I found myself using the spreadsheet program on a daily basis, for hours at a time.

What this essentially amounted to:

  • Pulling down large amounts of data from our department’s data model using large SQL queries that could themselves take most of the day to elucidate, let alone waiting on the query to yield results, which could easily warrant a bathroom break, a phone call, or if you were feeling adventurous, catching up on email.
  • Validating your results
  • Exporting to Excel (key step here!)
  • Massaging and formatting your data by implementing innumerable and often unwieldy functions that deserved their own time slot on your schedule for the day to figure out
  • Proofing your analysis so that it got to management in ship shape
  • Hoping that the numbers of an analyst from another department, who utilized the same metric on their report and would be at the same meeting, actually coincided with yours


Fast forward a bit and I’m sitting here, writing this blog as a sort of proverbial white flag in the great battle between Excel and the behemoth that is OBIEE. And just what is this white flag? Why, it’s Oracle’s most recent iteration of Smart View, which provides expanded functionality and support for the Microsoft Office suite of programs. Namely, its golden boy, Excel. That’s right, Excel, the darling of office staff everywhere, the program upon which empires rise and fall. To paraphrase a statistic from www.cfo.com, some 64% of public and private companies still use Excel and other “manual” solutions to perform their finance functions. So, in the world of the spreadsheet, when does it make sense to cross that blurry line from cell to subject area? Smart View now makes answering that question much easier. It seems that Oracle has really gotten a grasp on the formatting shortcomings of the last version and made up for them in spades. Or so, at least, they claim.

The Test Run – OBIEE to Excel 

The example below illustrates a simple import via Smart View. I generated a dashboard in Answers which mimics that of an Excel design I found online. Thank the good folks over at www.chandoo.org for their excellent skills in Excel dashboarding and for providing plenty of great examples. The dashboard contains a table with a selection of KPI’s that the user may then choose to sort on via a View Selector (each view has been sorted on a different KPI and is on a different Compound Layout). Upon selecting a KPI, the analysis will then display the Top 10 products by the KPI selected. In addition, the table contains conditional formatting which simply alerts users to the variance between different KPI’s and their targets. Lastly, there is a scatter plot view which displays our Product dimension as seen through the lens of Revenue and Quantity. Per the most recent Oracle documentation, we shouldn’t have any trouble including the current selections of a dashboard prompt either. Let’s see how it performs when we move it over to Excel.

 “OBIEE report and page prompts are fully supported as part of the import process. Dashboards can be imported through Oracle Smart View on a per page basis or the entire dashboard. Prompts are applied at the current state of the logged in user. Future releases of the product will support dashboard prompts directly through Microsoft Office.”


The Results

And there you have it! Excel displays our table and graph views as per the most recent selection from the dashboard prompt. But wait! Our conditional formatting seems to be missing – and this is even the case when the analysis view is exported directly as an Excel workbook.


Conditional Formatting

For our second scenario, let’s see how Excel handles a simpler, heat map style conditional formatting. I’ve made a simple table on our dashboard that measures Revenue, Quantity Sold, and the Average Order in $. I set up conditional formatting around the Average Order measure to see how Excel handles importing the color scheme for the currently selected Time parameters on the dashboard.

Contrastingly, we see that Smart View has preserved this simpler, heat map style of conditional formatting when importing from OBIEE. So perhaps it is Excel’s lack of a corresponding graphic in the previous example that caused the migration snafu? OBIEE doesn’t even seem to render our arrow graphics as per the documentation.

“Oracle BI Customizations and View Standards – The Import of Oracle BI content can leverage the customizations and view standards used within an OBIEE environment. All view designed modifications such as conditional formatting, background colors or data configuration is automatically translated to the Microsoft Office environment.”


Excel to OBIEE

Let’s see what the latest edition of Smart View offers when moving an analysis from Excel to OBIEE.
Because we weren’t able to import our full table view, why don’t we construct it using the View Designer? The interface looks clean and provides an intuitive approach to producing basic Answers views. Accessing our subject area, I simply selected the columns that matched those on our Answers analysis. After clicking ‘OK’, sorting on our Revenue column from largest to smallest and doing a little deleting, we have a pseudo ‘Top 10’ analysis by Revenue. Given the aesthetic attributes of our Answers analysis, lets see how we’re going to replicate this in Excel.

 


After selecting the table, we can navigate to the Design tab under ‘Table Tools’ and select an alternating grey scheme, which gives us the ‘Enable Alternate Styling’ design quality. Now let’s add some formulas and conditional formatting that will give us our calculated-column equivalents. We can insert two rows – one between Revenue and Target, and one between Qty and Target – to make room for conditional formatting and Excel’s Icon Sets feature. We then create a simple formula that subtracts Revenue and Quantity from their respective targets in the column between the two, assign conditional formatting, and voila! Excel even has a check box that lets you show the arrow only.

 


From Excel, we can select Publish View to deposit our analysis into our Shared Folder. The results indicate a sort of ‘two-way street’ between Smart View and Excel, and vice versa. Neither fully supports the formatting capabilities of the other, as if to say Smart View is giving ground with every new release. In this blog, we’ve taken a look at how Smart View handles some mildly complex conditional formatting and what it takes to replicate this feature in native Excel. In a user environment where reports are flying back and forth between the two platforms, Smart View definitely makes sense; however, it might be advisable to simply deliver the minimum of what is needed and let end users make any formatting-based modifications. After all, who would want to do all that work only to have it lost in translation?

Categories: BI & Warehousing

Best of OTN - Week of August 10th

OTN TechBlog - Fri, 2014-08-15 11:12

Brief public service announcement before we get into the OTN community best of content for the week.... Four Bands. Three Epic Nights. Join Oracle for three evenings of entertainment and fun, all during Oracle OpenWorld and JavaOne, September 28-October 2, San Francisco. Learn More

Architect Community

Any discussion of the best of OTN must include the OTN ArchBeat Podcast. Consistently among the top 3 most popular Oracle podcasts, ArchBeat focuses on real conversations with community members. Normally I pick the topics and the guest panelists for each program, but now you have a chance to take over that role and become a Guest Producer. In that role you'll pick the discussion topic and the panelists, while I do all of the grunt work, allowing you to bask in the glory.

Want to know how to become an OTN ArchBeat Podcast Guest Producer? You'll find the details here: Yes, you can take over the OTN ArchBeat Podcast!

And here are two examples of OTN ArchBeat Podcasts produced by community members:

-- OTN Architect Community Manager Bob Rhubart

Database Community

OTN DBA/DEV Watercooler Blog - Did You Say "JSON Support" in Oracle 12.1.0.2?.

-- OTN Database Community Manager Laura Ramsey

Java Community

The Java Source Blog - walkmod : A Tool to Apply Coding Conventions .

Friday Funny: I was worried the #NSA might be spying on me Thanks, @pacohope.

-- OTN Java Community Manager Tori Weildt

Systems Community

The OTN Systems Community HomePage- Find Great Resources for System Admins and Developers.

-- OTN Systems Community Manager Rick Ramsey

D2L raises $85 million but growth claims defy logic

Michael Feldstein - Fri, 2014-08-15 09:57

Yesterday D2L announced a second round of investment, this time raising $85 million (a mix of debt and equity) to go with their $80 million round two years ago (see EDUKWEST for a useful roundup of news and article links). While raising $165 million is an impressive feat, does this funding give us new information on the LMS market?

First, here are the claims by D2L as part of this round of financing, from EdSurge:

The deal comes on the heels of what the company calls “a year of record growth in the higher education, K-12 and corporate markets.” John Baker, founder and CEO, says the company currently serves 1,100 institutions and 15 million learners–up from 850 and 10 million, respectively, at this time last year. The company also recently opened offices in Latin America, Asia Pacific and Europe.

That’s a 29% growth in the number of institutions and a 50% growth in the number of learners in just one year. Quite impressive if accurate.

Yet the company went through a significant round of layoffs in late 2013 that let go more than 7% of its workforce, and according to both LinkedIn data and company statements they have had no significant growth in number of employees over the past year. According to the EdSurge article, the company does plan to use the new money to hire more staff [emphasis added].

This time, the company says it will play it cool. “There are no planned acquisitions at this stage,” Baker tells EdSurge. “At this point, we’re primarily focused on building out our learning platform to support our clients and thousands of integration partners.” To do so, the company will grow its team of 783 full-time employees. “We are actively looking for dozens of new positions; over 60 in R&D alone,” shares Baker.

Note this slide from John Baker’s FUSION keynote one year ago:

John Baker keynote slide from FUSION conference July 2013

If you take the information above – 800+ employees last year and 783 today – at face value, D2L has actually dropped in employee headcount. Does it make sense that a company could grow 50% in terms of learners without growing company employment, especially between two massive funding rounds?

Secondarily, what about the statement of “thousands of integration partners”? D2L is claiming to have more than twice as many integration partners as they do actual clients.

The other issue is market share. It is clear that D2L is planning to grow in corporate (10% of their business according to WSJ), K-12, and international higher ed markets; however, their largest business is still US higher ed. And here they have actually shown signs of no real growth, and for community colleges even dropping market share.

For the first time in an LMS market survey that I am aware of, Desire2Learn has actually lost market share. In fact, Desire2Learn is now lower than both Moodle and Canvas for community colleges according to this survey. This is a topic worth exploring further, especially in relation to last year’s layoffs.

Edutechnica ran the numbers for US higher education in October 2013.[1]

Edutechnica data from Oct 13 for US institutions with more than 2,000 FTE

Edutechnica ran the numbers again at the end of June for institutions of 2,000 FTE and above (to allow an apples-to-apples comparison with Oct 2013), but they have not yet published the results. George did agree to share preliminary information with me, and D2L came out with 225 institutions and 2,084,089 enrollments.[2] The Edutechnica numbers lead to an increase of 3% in number of US institutions and 2% in enrollment (number of learners) over the past 10 months. If D2L had grown its total number of learners by 50% over the past year, we would expect to see very different numbers for their largest market.

In another interview with local media outlet The Star, CEO John Baker described growth this way:

“We’re seeing very rapid growth in Europe, we’ve seen triple-digit growth in Latin America and Asia Pacific. In terms of new accounts we’re seeing great growth basically everywhere we look,” Baker said. Desire2Learn is prioritizing growth in “key hubs,” including Brazil, Mexico, the U.S. and Singapore, he said.

This raises some questions:

  • They mention growth everywhere they look, including the US. Where is this growth that is not showing up in market data?
  • What percentage of their business – in terms of revenue, customers or learner counts – comes from international markets? The company press releases mention their investments in international hubs but I can find no significant news on new accounts with huge numbers.

D2L did not respond to several requests for comment or clarification for this post.

My intention in this and previous posts is to explain what I am seeing in the market and challenge the marketing claims – education institutions need an accurate understanding of what is happening in the LMS market. It is worth noting that not a single media outlet listed by EDUKWEST or quoted above (WSJ, Reuters, Bloomberg, re/code, EdSurge, TheStar) challenged or even questioned D2L’s bold claims. It would help if more media outlets didn’t view their job as paraphrasing press releases.

  1. Edutechnica also ran an update in May 2014, but that used a different criteria of ‘more than 1,000 FTE’.
  2. By the way, think of how useful the Edutechnica data approach is compared to annual surveys, with the ability to adjust variables and update results so quickly.

The post D2L raises $85 million but growth claims defy logic appeared first on e-Literate.

The Secret Project Emerges

Oracle AppsLab - Fri, 2014-08-15 07:56

Noel (@noelportugal) and Raymond have been working on a secret project. Here’s the latest:

Thanks to AUX colleague and Friend of the ‘Lab, Rob Hernandez, for the 3D modeling.

So now you know why Noel bought the slap bands, but what goes in the case?


If you’ve been watching, you might know already.


LightBlue Beans from Punch Through Design

Those are LightBlue Beans from Punch Through Design (@punchthrough), h/t @colin_k.

Stay tuned.

Log Buffer #384, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-15 07:32

This Log Buffer Edition starts with some great posts from Oracle arena, then passes through the world of SQL Server, and stops at the MySQL field.

Oracle:

OAG/OES Integration for Web API Security: skin and guts by Andre Correa

Showing Foreign Key Names in your Data Modeler Diagrams

walkmod : A Tool to Apply Coding Conventions

Oracle VM Virtual Appliances for E-Business Suite 12.1.3 Now Available

RMAN Catalog requires Enterprise Edition (EE) since Oracle Database 12.1.0.2

SQL Server:

Restore Gene : Automating SQL Server Database Restores

With a hybrid cloud, can you get the freedom and flexibility of a public cloud with the security and bandwidth of a private cloud?

A clear understanding of SQL Data Types and domains is a fundamental requirement for the Database Developer, but it is not elementary.

Automating SQL Server Agent Notification

Adding Custom Reports to SQL Server Management Studio

MySQL:

The Road to MySQL 5.6 — A DBA Perspective

Virtual servers for MySQL are popular, but are they the answer? Should we be containing our instances instead?

Jeremy Cole recently blogged about the feature SET GLOBAL sql_log_bin.

Which SQL queries take all the time? Using MaxScale to answer that age old question.

SBR vs RBR when using On Duplicate Key Update for High Availability

Categories: DBA Blogs

How to beat workday blues?

Vattekkat Babu - Fri, 2014-08-15 06:37

Let us face it - all of us feel like we have achieved or done very little after spending a long day away from family. Then you look back and find that you could have spent at least some of that time with family!

I've been observing my work habits a lot and I think I have found out something that works for me.

I am summarizing these as a NOT-TODO list of 3 items. I am a software engineer by profession and by passion.

Has this worked for me? Absolutely much better than when I was not following these rules.

UnifiedPush Server: Docker, WildFly and another Beta release!

Matthias Wessendorf - Fri, 2014-08-15 04:07

Today we are announcing the second beta release of our 1.0.0 version. This release contains several improvements:

  • WildFly 8.x support
  • PostgreSQL fix
  • Scheduler component for deleting analytics older than 30 days
  • Improvements on the AdminUI
  • Documentation

The complete list of included items is available on our JIRA instance.

With the release of the server we also released new versions of the senders for Java and Node.js!

Docker

The team is extremely excited about the work that Docktor Bruno Oliveira did on our new Docker images:

Check them out!

Documentation

As mentioned above, the documentation for the UnifiedPush Server has been reorganized, including an all new guide on how to use the UnifiedPush Server.

Demos

To get easily started using the UnifiedPush Server we have a bunch of demos, supporting various client platforms:

  • Android
  • Apache Cordova (with jQuery and Angular/Ionic)
  • iOS

The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

Docker

Bruno Oliveira did Docker images for the Quickstart as well:

Feedback

We hope you enjoy the bits, and we do appreciate your feedback! Swing by our mailing list! We are looking forward to hearing from you!

NOTE: the OpenShift Online offering will be updated within the next day or two.

Enjoy!


Oracle Priority Service Infogram for 14-AUG-2014

Oracle Infogram - Thu, 2014-08-14 15:41

OpenWorld
Each week leading up to OpenWorld we will be publishing various announcements, schedules, tips, etc. related to the event.
This week:
The Storage Forum at Oracle OpenWorld 2014.
Oracle Support
We’ve reported before on the My Oracle Support Accreditation Series. The My Oracle Support blog lets us know about New Products Added to the series.
RDBMS
From the Oracle Database In-Memory blog: Getting started with Oracle Database In-Memory Part I - Installing & Enabling.
Performance
From the A Wider View blog: Watch Oracle DB Session Activity With The Real-Time Session Sampler.
From the ORACLE DIAGNOSTICIAN: ASH Basics.
ODI
OWB to ODI 12c Migration in action, from the Data Integration blog.
Solaris
From The Observatory blog: VXLAN in Solaris 11.2.
SPARC
From EnterpriseTech: Oracle Cranks Up The Cores To 32 With Sparc M7 Chip.
Security
IT-Security (Part 6): WebLogic Server and Authorization, from The Cattle Crew.
MAF
From Shay Shmeltzer's Weblog: Required Field Validation in Oracle MAF.
SOA
SOA Transformation through SOA Upgrade, from the SOA & BPM Partner Community Blog.
ADF and BPM
From the Dreamix Group: The Ultimate Guide to Separating BPM from ADF.
From the Waslley Souza Blog: Communication between Task Flows using Task Flow Parameters.
From Andrejus Baranovskis Blog: ADF Thematic Map in ADF 12c (12.1.3).
IOUG
Always good to take an occasional look at upcoming events from IOUG.
…and Finally
Some of the trends of today, based on buzz words:
A year of tech industry hype in a single graph, from The Verge.
And some of the attempted buzzes of the past that went buzzzzzz….THUD!

22 Of The Most Epic Product Fails in History, from Business Insider.


OGG-00212, what a frustrating error.

DBASolved - Thu, 2014-08-14 14:50

Normally, I don’t mind errors when I’m working with Oracle GoldenGate (OGG); I actually like getting errors – they keep me on my toes and give me something to solve.  Clients, on the other hand, do not like errors…LOL.  Solving errors in OGG is normally pretty straightforward with the help of the documentation, although today I can almost disagree with the docs.

Today, as I’ve been working on implementing a solution with OGG 11.1.x on the source side and OGG 11.2.x on the target side, this error came up as I was trying to start the OGG 11.1.x Extracts:

OGG-00212  Invalid option for MAP: PMP_GROUP=@GETENV(“GGENVIRONMENT”.

OGG-00212  Invalid option for MAP:  TOKENS(.

I looked around in the OGG documentation and other resources (online and offline); some errors are self-explanatory, but not in the case of OGG-00212.  Looking up the error in the OGG 11.1.x docs was pointless; it didn’t exist.  When I finally found the error in the docs for OGG 11.2.x, the docs say:

OGG-00212: Invalid option for {0}: {1}
Cause: The parameter could not be parsed because the specified option is invalid.
Action: Fix the syntax

Now that the documentation has stated the obvious, how is the error actually corrected?  There is no easy way, because the problem is syntax related.  In the case I was having, the error was being thrown due to missing spaces in the TABLE mapping.  Silly, I know, but true.

Keep in mind, to fix an OGG-00212 error, especially with OGG 11.1.x or older, remember to add spaces where you may not think one is needed.

Example (causes the error):

TABLE <schema>.SETTINGS,TOKENS( #opshb_info() );

Example (fixed the error):

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

Notice the space between the comma (,) and TOKENS, and also between TOKENS and the open parenthesis. Those simple changes fixed the OGG-00212 error I was getting.

Hope this helps!

Enjoy!

http://about.me/dbasolved



Filed under: Golden Gate
Categories: DBA Blogs

PeopleTools 8.54 – New Functionality, New Browser Releases

PeopleSoft Technology Blog - Thu, 2014-08-14 14:24

PeopleTools 8.54 has numerous enhancements to offer, but it also has some browser requirements that go along with it.  As customers review the much-improved user interface and the other enhancements delivered in PeopleTools 8.54, they will look to take advantage of the release and plan their upgrade.  As they plan, diligent customers will review certifications and notice that PeopleTools 8.54 certifies only newer browser releases.  For Internet Explorer, IE 9 is the absolute minimum for the Classic UI, and IE 11 (or higher) is required to take advantage of the new Fluid UI.  As they continue to prepare for the upgrade, some may discover that they have older applications with hard requirements for a less-than-current IE release.  Can you say IE 8?  I have already come across a couple of scenarios where customers have that one critical application that hasn’t been updated in 5+ years and requires the use of IE 8; it can’t work with IE 9 or above.  Another situation I’ve seen is where a customer isn’t scheduled to roll out IE 9 (or above) to their user base prior to their scheduled go-live date.

As they evaluate options, some customers facing this situation are able to implement a dual browser environment using Chrome or Firefox for PeopleSoft, and an older IE browser required for antiquated applications.  Other customers have begun to ask what PeopleTools functionality might simply not work if they decide to move forward with an uncertified browser environment.  Since we test our software with certified browser combinations, we sometimes aren’t sure which features might be partially supported by older browsers, and which simply won’t work.

The problem is that older versions of Internet Explorer simply did not contain the functionality that current browsers have.  While those versions render HTML and CSS content, they often do so in very specific, non-standard fashion.  A state of the art application like PeopleSoft relies on rich functionality implemented in the latest versions of the HTML and CSS standards.  PeopleSoft utilizes AJAX and the latest accessibility suggestions as outlined in the Web Accessible Initiative – Accessible Rich Internet Applications Suite (WAI-ARIA).  IE 8 and previous browsers were not designed to support this functionality adequately, and cannot deliver the performance that modern browsers do.

We expect that the following areas would be problematic if using IE 8 or (gulp) something older with the Classic UI.  Of course, Fluid UI functionality will not work.

  • Layout issues
  • Nonfunctional breadcrumbs
  • Accessibility problems
  • Performance issues
  • Problematic charts and graphs
  • Mobile Application Platform (MAP)

Note that there are almost certainly other issues that would arise from the use of outdated and uncertified browsers.

As you make plans to roll out the best release of PeopleTools yet, we STRONGLY recommend that you use only certified environment components.  We test and certify environments for a reason - so that we can find as many issues as possible, before you do.  Should you find a bug we missed, our Support organization stands ready to assist you in obtaining a resolution in your certified environment.  We want your roll out to be as smooth as possible - take advantage of our testing and give your users the best experience available.