Feed aggregator

How is SaaS Product Management different from traditional Product Management?

Arvind Jain - Tue, 2010-02-23 00:27
As Enterprise Architects we are inclined to always question how a particular technical architecture will benefit the business strategy of our company. In that same spirit, I had a debate with a colleague about whether Product Management for a SaaS or Cloud-based product is very different from a traditional approach to product management.

As an SOA Architect I can see some of the challenges with reuse and creating global services. So here are some of the key differences between traditional product management and SaaS product management that I can think of. Please comment with your thoughts or elaborate further.

In SaaS product management you have to worry about all these additional things:

1) Data management of customer data (backup, recovery, export, migration)
2) Additional security around Access & Authorization
3) You earn your money every day and every moment, so it is not the traditional sell-once-and-forget-until-the-next-new-product model. If you fail, customers will not renew the subscription. So you have to build stickiness into the SaaS product, much like building a website for the lowest bounce rate and highest CTR (click-through rate), so that there is the highest probability of customers renewing.
4) Special considerations for On Demand / Multi-Tenancy aspects of the product / solution.
5) Much higher emphasis on Disaster Recovery, Peak Load and High Availability.
6) One size does not fit all, so how would you provide innovation in the cloud? How do you empower customers in the cloud so that they can maintain their cutting edge through intelligent customizations?

I am also thinking there will be additional issues of concern, like multi-tenant pricing (based on usage patterns, product differentiation, etc.), so please comment with your thoughts or elaborate further if you can.

My Interview Published in the Peer-To-Peer Column of Oracle Magazine

Sabdar Syed - Mon, 2010-02-22 13:55
Hello,

This is to share with you all that Oracle published my interview in the
Peer-to-Peer column of Oracle Magazine (March/April 2010 edition).

Regards,
Sabdar Syed.
http://sabdarsyed.blogspot.com

Intel's cloud chip and physicalization

Vikas Jain - Mon, 2010-02-22 01:00
Per Intel CTO Justin Rattner, Intel is working on a single-chip cloud computer:
  • Parts of the chip will be powered down when not in use
  • First iteration involves a 48 core processor that consumes 25 - 125 watts
  • A new term was coined, "physicalization", which means dedicating one or more cores to a specific application or a portion of the application. This is the complete opposite of "virtualization", which means running applications on whatever processor resources are available (see the sketch below the article link)
For the complete story, see this Forbes article.
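You can approximate physicalization on today's hardware by pinning a process to specific cores. A minimal sketch using Linux's taskset command (the server binary name is hypothetical):

# Dedicate CPU cores 0 and 1 to this application
taskset -c 0,1 ./my_server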

Cisco's new urbanism

Vattekkat Babu - Sun, 2010-02-21 00:17

Fast Company's article about Cisco's vision for New Songdo is a fascinating read on smart cities. Perhaps everything will be connected, with every apartment able to do a great deal over the network via highly functional phones feeding off mobile platforms?

Labelling in Outlook 2003 ala Gmail

Vattekkat Babu - Sun, 2010-02-21 00:01

I found that moving mails into project folders while the mails are still on an open topic takes too much time. Gmail's "label" idiom really helps in this situation. It turns out we can do that with Outlook 2003 too with some small macro work. First, see my entry on how to put macros and arrange the toolbar in Outlook 2003. Then add the following macro to the module. Duplicate the second subroutine for as many categories as you have, and then put toolbar entries for each of those. I think you should be able to manage with fewer than 10 categories. At times you may get mail on an old subject, which you can just read, act on and archive directly without tracking it.

' Apply the given category to every mail item currently selected
Sub SetCategory(strCat As String)
    Dim Item As Object
    Dim SelectedItems As Selection
    Set SelectedItems = Outlook.ActiveExplorer.Selection
    For Each Item In SelectedItems
        With Item
            .Categories = strCat
            .Save
        End With
    Next Item
End Sub

' One thin wrapper per category; duplicate this for each toolbar button
Sub SetCategoryAdmin()
    SetCategory ("Admin")
End Sub

Cool but unknown RMAN feature

Jared Still - Fri, 2010-02-19 11:53
Unknown to me anyway until just this week.

Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.

The poster's RMAN scripts were written so that the only connection while making backups was a local one, using only the control file for the RMAN repository.
rman target sys/manager nocatalog

After the backups were made, a connection was made to the RMAN catalog and a RESYNC CATALOG command was issued.
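A minimal sketch of that two-step approach, following the poster's description (passwords, paths and the rcat alias are placeholders):

# Step 1: back up with the control file as the only RMAN repository
$OH/bin/rman target sys/manager nocatalog <<-EOF >> $LOGFILE
backup database plus archivelog;
EOF

# Step 2: once the backup has succeeded, bring the catalog up to date
$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE
resync catalog;
EOF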

The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:


rman target sys/manager catalog rman/password@rcat

This week I found out this is not true.

Possibly this is news to no one but me, but I'm sharing anyway. :)

Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to set up TNS so that the catalog database could be found.

After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:

archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects


Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version 5.0.0.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10

Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version 5.0.0.0
Finished Control File and SPFILE Autobackup at 16-FEB-10

RMAN> RMAN>

Recovery Manager complete.

Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on Tue Feb 16 04:07:59 PST 2010 ====


That seemed rather strange, and it was happening in both of the new databases.
The key to this was to look at the top of the log file, where I found the following:

BACKUP_MODE: lvl_0
BACKUP_TYPE: INCREMENTAL LEVEL=0
ORACLE_SID : oiddev
PWD_SID : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.


RMAN;
connected to target database: OIDDEV (DBID=3982952863)

RMAN;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified

RMAN;
Starting backup at 16-FEB-10
using target database controlfile instead of recovery catalogallocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset

Notice the line near the bottom of the displayed output?

The one that says "using target database controlfile instead of recovery catalog"?

RMAN will go ahead with the backup of the database even though the connection to the catalog database failed. This apparently only works when running in a scripted environment; when I tried connecting interactively on the command line, RMAN would simply exit when the connection to the catalog could not be made.

The RMAN scripts are being run on a Linux server in the following format:

$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE

rman commands go here

EOF


This was quite interesting to discover, and may be old news to many of you, but it was new to me.

This is not exactly a new feature either - one of the databases being backed up is 9.2.0.6. And of course there is now no need to update the backup scripts.
Categories: DBA Blogs

Oracle SQL Developer on OS X Snow Leopard

Duncan Mein - Fri, 2010-02-19 05:40
I have been using Oracle SQL Developer Data Modeler for a while now within a Windows XP environment. It seems pretty good (albeit a little slow, but hey, show me an Oracle Java client application that is quick. Oracle Directory Manager? OWB Design Centre? I shall labour this point no more) and I was looking forward to trying it out on my new 27" iMac.

I promptly downloaded the software from OTN, and a quick read of the instructions suggested I needed to do no more than run the datamodeler.sh shell script, since I already had Java SE 6 installed.

As it turns out, the datamodeler.sh script in the root location does little more than call another script, also called datamodeler.sh, found in the /datamodeler/bin directory, which is the one you actually need to execute to fire up SQL Data Modeler.

When this script runs, it prompts you for the full J2SE file path (which I had no idea how to find) before it will run. After a quick look around Google, I came across the java_home command, which when executed like this:

cd /usr/libexec
./java_home

prints the full path value that you need to open SQL Data Modeler

e.g. /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home

Now that we are armed with the full path needed, opening up SQL Data Modeler from a fresh terminal window goes like this:

cd Desktop/datamodeler/bin
. ./datamodeler.sh

Oracle SQL Developer Data Modeler
Copyright (c) 1997, 2009, Oracle and/or its affiliates.All rights reserved.

Type the full pathname of a J2SE installation (or Ctrl-C to quit), the path will be stored in ~/jdk.conf
/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home

and hey presto, SQL Data Modeler is up and running.

Once you have pointed the shell script at your J2SE installation, you won't have to do it again.
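If you would rather skip the interactive prompt on the first run, you could pre-seed that file yourself; a one-line sketch, assuming the script simply reads the stored path back from ~/jdk.conf:

# Write the current JDK home into the file datamodeler.sh consults
/usr/libexec/java_home > ~/jdk.conf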

Now I can finally use Data Modeler on my 27" Screen :)

Unit Testing with SQL Developer

Peter O'Brien - Thu, 2010-02-18 16:03
December's release of SQL Developer 2.1 has a number of new bells and whistles. Two of the main new features are:
  • Data Model Viewer. This is a free, read-only viewer based on Oracle SQL Developer Data Modeler. With this viewer you can open existing data models as well as generate data models based on your database. The generated data model cannot be saved, though.
  • Unit Testing. Based on the popular xUnit Four-Phase Test pattern, this feature makes testing of procedures and functions a breeze. Put simply, it allows one to construct a repository of unit test cases which includes what one would expect for automated testing: setup, execute, assert result, record results, teardown (a small example follows at the end of this post).
The free Data Model Viewer is a nice introduction to the Data Modeler product, which is not free. The unit testing framework, though, really does mean that the quality of code in the database can be asserted and maintained much more easily than was previously possible. There is a simple Oracle By Example tutorial on the SQL Developer Unit Test feature available at:

http://www.oracle.com/technology/obe/11gr2_db_prod/appdev/sqldev/sqldev_unit_test/sqldev_unit_test_otn.htm
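As a concrete illustration, here is the kind of standalone function such a unit test would target. The function is hypothetical (not from the tutorial), with the four phases a test case for it would map onto noted in the comments:

-- A test case for this function specifies: any setup needed (none here),
-- the call itself (execute), an expected return value (assert result),
-- and any cleanup (teardown); SQL Developer records the results.
create or replace function add_days (
    p_date in date,
    p_days in number
) return date is
begin
    return p_date + p_days;
end add_days;
/
-- Example expectation: add_days(date '2010-01-01', 31) = date '2010-02-01'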

VPD + bad ANYDATA practices can really bite

Charles Schultz - Thu, 2010-02-18 13:20
After several days of intense testing, 4 SRs with Oracle Support (and another with the ERP vendor), and very helpful information from Maxim Demenko about "out-of-range" date values, I have developed a test case that demonstrates how using bad ANYDATA practices in the context of VPD can really mess you up.

Some background:
We have an application that recently started to utilize ANYDATA. Unfortunately, the application did not implement validation checks, and the nature of ANYDATA makes table check constraints a near impossibility (I have not found any good ways to go about it). So we (not I, but colleagues) developed VPD rules to validate data. After a month of testing, a tester noticed that we had some really funny dates, ranging from 4290 BC to 5090 BC.

We tried tracing (10046, 10053, 10730), but nothing jumped out at us, though we may have uncovered a new bug - more on that in a second. We tried using LogMiner, but Oracle Support finally convinced us that LogMiner does not support ANYDATA. :-( Finally we just started shooting in the dark, testing different combinations of rules and data inputs.

We stumbled upon the fact that using CAST to convert ANYDATA into a datatype has bad consequences. In particular, if you try something like cast(some_anydata_column as varchar2(1)) and the column is a DATE, for example, you get an ORA-3113/ORA-7445 (under 10.2.0.4 + the January 2010 PSU). The fine folks who had written our RLS policies had used CAST extensively, and the ironic part is that no errors were being generated on the application side. Instead, bad dates were sneaking into the dataset.

After reading the documentation a bit more, I discovered that ANYDATA is an object-oriented type (much to my surprise), and it has member functions. We had a hard time trying to figure out exactly how to use the member functions, since one needs to instantiate an object first, and the documentation does not give any examples, let alone explain the usage of "object-oriented" in a relational database. Finally I stumbled upon using sys.anydata as an instantiation, which seemed to work well for us.
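A minimal sketch of that member-function idiom, run against the test table created below (the type dispatch is an illustration, not our exact policy code):

-- Dispatch on the stored type instead of CASTing blindly
select gorsdav_table_name,
       case sys.anydata.gettypename(gorsdav_value)
           when 'SYS.VARCHAR2' then sys.anydata.accessvarchar2(gorsdav_value)
           when 'SYS.DATE'     then to_char(sys.anydata.accessdate(gorsdav_value))
           when 'SYS.NUMBER'   then to_char(sys.anydata.accessnumber(gorsdav_value))
       end as value_as_text
  from gorsdav;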

Why did Oracle develop ANYDATA?!? It seems anti-RDBMS. And it makes things messy for us DBA types. As I explained to my colleagues, object-oriented data buckets are great for developers, up until they break. Then they are a pain to figure out.

I still have an outstanding question of exactly how the ANYDATA column overflows into the DATE field and gives us whacked out dates. If any Oracle gurus out there want to chime in, please do so.

Here is the code I used to replicate our issue:

drop user test cascade;
drop user test_no_vpd cascade;

create user test_no_vpd identified by test4#;
grant create session, EXEMPT ACCESS POLICY to test_no_vpd;

create user test identified by test3#;
grant create session, alter session, resource, create any context to test;
grant execute on dbms_rls to test;
connect test/test3#;

CREATE TABLE GORSDAV (
GORSDAV_TABLE_NAME VARCHAR2(30 CHAR) NOT NULL,
GORSDAV_VALUE SYS.ANYDATA NOT NULL,
GORSDAV_ACTIVITY_DATE DATE NOT NULL,
pill_1 number default 1,
pill_2 number default 2,
pill_3 number default 3)
;

insert into gorsdav values ('some_table_1',sys.anydata.convertnumber(1),sysdate,0,0,0);
insert into gorsdav values ('some_table_1',sys.anydata.convertdate(sysdate),sysdate,0,0,0);
insert into gorsdav values ('some_table_1',sys.anydata.convertvarchar2('Y'),sysdate,1,0,0);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('Yes'),sysdate,0,0,0);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('Y'),sysdate,0,0,3);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('No'),sysdate,0,0,0);
insert into gorsdav values ('some_table_3',sys.anydata.convertvarchar2('MaybeSo'),sysdate,0,0,0);

commit;

-- Using FGAC example from http://www.orafusion.com/art_fgac.htm

-- A dummy procedure to satisfy the CREATE CONTEXT command; does not actually do anything

PROMPT Create Application Role Procedure
create or replace procedure
set_testapp_role(p_user varchar2 default sys_context('userenv', 'session_user')) is
v_ctx varchar2(16) := 'testapp_ctx';
begin
dbms_session.set_context(v_ctx,'rolename','APP_OWNER');
end;
/


PROMPT Create context
create or replace context testapp_ctx using set_testapp_role;


-- This is just a mock up test; I am not concerned about real-life roles or security,
-- thus I am returning the same predicate no matter who the user is

PROMPT Create security function
create or replace function testapp_security_function (p_schema varchar2, p_object varchar2)
return varchar2 is
begin
return '(sys.anydata.accessvarchar2(gorsdav_value) = ''Y'' and pill_1 = 1) or pill_1 <> 1';
end;
/


PROMPT Create RLS Table Policy
declare
begin
DBMS_RLS.ADD_POLICY (
object_schema => 'TEST',
object_name => 'GORSDAV',
policy_name => 'TESTAPP_POLICY',
function_schema => 'TEST',
policy_function => 'TESTAPP_SECURITY_FUNCTION',
statement_types => 'SELECT,UPDATE,INSERT,DELETE',
update_check => TRUE,
enable => TRUE,
static_policy => FALSE);
end;
/

PROMPT Inserting a control row into the table to show the date and insert are fine
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('123456789'),sysdate,0,0,0);
commit;

PROMPT Selecting data from table - should return eight rows with no errors
select * from gorsdav;

-- The following function uses CAST to get the varchar2 data; however, a majority of the
-- data is larger than the CAST target, thus we get an error. Even if we use varchar2(200),
-- some datatypes are DATE and NUMBER.

PROMPT Create "bad" security function
create or replace function testapp_security_function (p_schema varchar2, p_object varchar2)
return varchar2 is
begin
return '((cast(gorsdav_value as varchar2(1)) = ''Y'' and pill_1 = 1) or pill_1 <> 1)';
end;
/

PROMPT Inserting into table - this will work with no problems.
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('Y'),sysdate,0,2,0);

commit;


PROMPT Inserting into table - this will complete successfully, but will insert a "bad" date
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('123456789'),sysdate,0,0,0);

commit;

-- PROMPT Selecting data from table - should hang for about 10 seconds and kick you out with
-- PROMPT ORA-3113 and ORA-7445 in the alert.log
-- select * from gorsdav;

grant select on test.gorsdav to test_no_vpd;

PROMPT Connecting as a non-VPD user (exempt access policy)
connect test_no_vpd/test4#
select * from test.gorsdav;



Funny YouTube Video Featuring Oracle Data Mining

Marcos Campos - Thu, 2010-02-18 12:08
Maybe I am too much of a data mining geek, but I found the video below to be funny. It also talks about a super cool feature ODM introduced in 11.2: the ability to score data mining models at the disk controller level in Exadata. This is a significant performance booster. It also makes it feasible to produce actionable insights from massive amounts of data extremely fast. More on this on a…
Categories: BI & Warehousing

Oracle Data Mining Races with America's Cup

Marcos Campos - Thu, 2010-02-18 11:57
For those that have not heard, the BMW Oracle Racing team won the America's Cup sailing an incredible new boat. What even those that have been following the news on the race do not know is that Oracle Data Mining helped the performance team tune the boat. I participated, helping with that problem, and it was a very hard one: imagine standing under an avalanche of data - 2500 variables, 10 times…
Categories: BI & Warehousing

New release of Lucene Domain Index based on Lucene 2.9.1

Marcelo Ochoa - Tue, 2010-02-16 06:39
We have released a new Lucene Domain Index (LDI) based on the Lucene 2.9.1 core.
Lucene 2.9.1 has been out for several months now, so why did this new release of LDI come late?
The answer is that we added parallel processing support.
This new feature is enabled by a new LDI parameter, ParallelDegree; by setting this parameter to a value greater than 1, LDI creates multiple Lucene directory storages to process insertions in parallel.
Let's see a practical example:
create index source_big_lidx on test_source_big(text)
indextype is lucene.luceneindex parameters('BatchCount:250;ParallelDegree:2;SyncMode:OnLine;LogLevel:INFO;AutoTuneMemory:true;PerFieldAnalyzer:line(org.apache.lucene.analysis.KeywordAnalyzer),TEXT(org.apache.lucene.analysis.SimpleAnalyzer);FormatCols:line(0000);ExtraCols:line "line"');

The example above creates an LDI with ParallelDegree equal to 2 and BatchCount equal to 250. Parallel degree is only used when SyncMode is OnLine; future LDI releases will include parallel operations when SyncMode is Deferred.
Once this DDL operation is executed, LDI creates three OJVMDirectory Lucene stores - two for parallel index operations plus the master store - and then batches of 250 rows are enqueued for indexing in parallel mode.
A parallel insert implies a parallel operation (document creation and insertion into a secondary store) and a serialized merge into the master store.
Machines with multi-core chips or RAC installations will speed up LDI index creation/rebuild; obviously, when IO concurrency is the bottleneck there is no performance improvement with ParallelDegree > 1. Look at this screenshot from my notebook:
As you can see, two background processes (AQ processes named ora_j*_test) are running and consuming most of the CPU; these are the LDI operations (insert|merge). You can also see two DB Writer processes trying to write in parallel the information that LDI is generating.
A complete list of changes in this new release is in the ChangeLog.txt file.
Downloads for 11g and 10g in binary format are in the SF.net project download section.
Source code, obviously, is available through CVS access.
Online documentation is available as a PDF or as a Google Document.
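For completeness, querying through an LDI index is done with the lcontains/lscore operator pair; a usage sketch, assuming the operator syntax carried over from previous LDI releases:

-- Full-text query against the index created above, best matches first
select line, lscore(1)
  from test_source_big
 where lcontains(text, 'parallel AND lucene', 1) > 0
 order by lscore(1) desc;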

Snow Leopard upgrade

Tahiti Views - Mon, 2010-02-15 10:25
I finally upgraded the main iMac to Snow Leopard. For the first time ever, an upgrade actually resulted in more free space, an extra 6 GB worth. The main features that I notice are fairly minor -- the ability to view stacks on the dock using a normal icon instead of the smashed-together icons of the apps in the folder; the ability to have the time announced every 15, 30, or 60 minutes by the…

Customer satisfaction - the Xerox Effect

Nigel Thomas - Fri, 2010-02-12 04:42
Thanks to Martin Widlake for pointing to this gem of a paper from Dennis Adams (pdf), which observes that an increase in customer satisfaction can lead to an increase in negative feedback, and vice versa. Anyone who has worked in customer support (whether on an internal help desk or for external customers) will have gone through a "why don't they love us, we're doing such a great job for them?" period. This might explain why.

MIT South Asian Alumni Association - MBA Panel Discussion

Arvind Jain - Tue, 2010-02-09 01:42
The MIT South Asian Alumni Association had invited me to a panel discussion at the Stanford University campus to share my MBA experience and guide future business school applicants. It was a good debate and, most importantly, I believe the association is doing a great service to the public. More details can be found here:

http://alumweb.mit.edu/upload/AS/MBA_event_flyer_26414.pdf

Oracle extends BTM and SOA Mgt through Amberpoint acquisition

Vikas Jain - Mon, 2010-02-08 13:06
Oracle's acquisition of AmberPoint extends its capabilities around Business Transaction Monitoring (BTM), SOA Management and SOA Governance within its SOA product offering.

Read the following resources for more info.
From the FAQ:
The AmberPoint solution will provide several critical capabilities requested by customers.
• Application Discovery – Automatically discovers components and interactions and ensures visibility of the entire heterogeneous SOA environment
• Application Performance Management – Tracks end-to-end performance and availability
• Business Transaction Management – Ensures reliability of individual business transactions and tracks the progress in real time to pinpoint any issues
• SOA Governance – Provides closed-loop governance by reporting run-time results to design-time governance solutions

UKOUG - Northern Server Tech Day 2010

Lisa Dobson - Mon, 2010-02-08 10:42
The UKOUG is once again running the Northern Server Technology Day. This is the 5th year we have run the event, and this year it is taking place on 29th April at the Hilton Hotel, Leeds. This annual event is aimed at DBAs and Developers in the North of England and delivers a full day of server tech presentations. If anybody is interested in presenting at this event then please contact either myself…

Integrating REST clients with STS for token exchange

Vikas Jain - Fri, 2010-02-05 17:37
Where REST services demand a particular type of token for access, REST clients can potentially integrate with an STS server to acquire the requisite token, and pass it to the service.

I haven't yet seen customers widely asking for such solutions, but the need can arise where companies standardize across applications on tokens such as SAML for access control, which carry not only the username but also attributes associated with the user profile.

In such scenarios, the following flow would be applicable (a rough sketch of the request follows the list):
  1. The REST client acquires a token from the STS server, preferably through a REST binding of the STS, but any other supported binding should also be okay.
  2. Once it receives the token, it adds it to the "Authorization" HTTP header of the REST request.
  3. The REST service receives the request, and a security interceptor (agent) picks up the token to check for access validity. The interceptor can optionally assert the identity into the service for identity propagation needs.
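A sketch of what the request in step 2 might look like, with hypothetical host, resource path and token value; the exact Authorization scheme depends on what the service's interceptor expects:

GET /orders/1234 HTTP/1.1
Host: api.example.com
Authorization: SAML <base64-encoded-assertion-from-STS>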
I would be interested to know if you run into such scenarios and are looking for products to support them. You can leave blog comments.

Oracle aims to secure future of Sparc, Solaris and Sun hardware

Stephen Booth - Thu, 2010-02-04 11:09
Interesting article on ComputerWeekly.com about how Oracle are looking to secure the future of Sparc, Solaris and Sun hardware. They don't mention it, but I wouldn't be surprised if we see 'Database as an Appliance' coming soon, with Oracle Database running on Sun hardware with management taken out of the hands of the local DBA and automated, or made accessible only to Oracle themselves. This…

RESTful STS

Vikas Jain - Wed, 2010-02-03 14:52
A Secure Token Service (STS) typically has a SOAP endpoint, with the WS-Trust standard profiling the interactions. How about taking the complexity of SOAP away and adding the simplicity of a REST interface to the STS? At the end of the day, an STS is a token service that applications use to acquire tokens, and it should be accessible through different types of bindings - SOAP, REST, etc.

What would be the interaction pattern for such a RESTful STS? (A rough sketch of the exchange follows the list.)
  1. The client accesses the RESTful STS using an HTTP GET/POST method, sending a RequestSecurityToken (RST) as part of the HTTP message.
  2. The RESTful STS sends back the requested token as a RequestSecurityTokenResponse (RSTR) in the HTTP response message.
  3. The STS endpoint could be secured like any other HTTP resource, using web access management products such as Oracle Access Manager (OAM) with username/password or certificate credentials.
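A rough sketch of that exchange over plain HTTP, with a hypothetical endpoint; the RST/RSTR payloads are the same WS-Trust elements, just carried outside a SOAP envelope:

POST /sts/issue HTTP/1.1
Host: sts.example.com
Content-Type: application/xml

<wst:RequestSecurityToken>...</wst:RequestSecurityToken>

HTTP/1.1 200 OK
Content-Type: application/xml

<wst:RequestSecurityTokenResponse>...</wst:RequestSecurityTokenResponse>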

RESTful STS can lead to wider adoption
Many languages/frameworks (such as Adobe Flex and Silverlight) don't support the full capabilities of a SOAP stack, but they do support basic HTTP interactions. Such frameworks could easily plug into a RESTful STS for their token needs.

Applicability of RESTful STS in the cloud
As the cloud remains the innovation vehicle for 2010, I try to find the applicability of any new concept to the cloud as well.
Today, the Googles, Amazons and Salesforces of the world provide RESTful APIs for all their services. If they decide to broker trust using some sort of STS, then it makes perfect sense for them to provide a RESTful STS with API keys and OpenID/OAuth models to access it.

