You’ve been sold on the whole concept of the multitenant option in Oracle 12c and you are launching full steam ahead. Your first database gets upgraded and converted to a PDB, so you start testing your shell scripts and bang! Broken! Your company uses CRON and shell scripting all over the place and the multitenant architecture has just gone and broken the lot in one fell swoop! I think this will end up being a big shock to many people.
I’ve been talking about this issue with a number of people since the release of Oracle 12c. Bryn Llewellyn did a session on “Self-Provisioning Pluggable Databases Using PL/SQL” at last year’s UKOUG conference, which covered some of these issues. More recently, I spent some time speaking to Hans Forbrich about this when we were on the OTN Yatra 2014 tour.
Today, I put down some of my thoughts on the matter in this article.
- Multitenant : Running Scripts Against Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1)
Like most things to do with Oracle 12c, I’m sure my thoughts on the subject will evolve as I keep using it. As my thoughts evolve, so will the article.
Tim…

Running scripts in CDBs and PDBs in Oracle Database 12c was first posted on April 19, 2014 at 4:23 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
When I’m asked to talk to academics, the requested subject is usually a version of “What should we know about what’s happening in the actual market/real world?” I then try to figure out what the scholars could stand to hear that they perhaps don’t already know.
In the current case (Berkeley next Tuesday), I’m using the title “Necessary complexity”. I actually mean three different but related things by that, namely:
- No matter how cool an improvement you have in some particular area of technology, it’s not very useful until you add a whole bunch of me-too features and capabilities as well.
- Even beyond that, however, the simple(r) stuff has already been built. Most new opportunities are in the creation of complex integrated stacks, in part because …
- … users are doing ever more complex things.
While everybody on some level already knows all this, I think it bears calling out even so.
I previously encapsulated the first point in the cardinal rules of DBMS development:
Rule 1: Developing a good DBMS requires 5-7 years and tens of millions of dollars.
That’s if things go extremely well.
Rule 2: You aren’t an exception to Rule 1.
- Concurrent workloads benchmarked in the lab are poor predictors of concurrent performance in real life.
- Mixed workload management is harder than you’re assuming it is.
- Those minor edge cases in which your Version 1 product works poorly aren’t minor after all.
My recent post about MongoDB is just one example of same.
Examples of the second point include but are hardly limited to:
- Hadoop and its ecosystem.
- The general trend of supporting multiple data paradigms in one system …
- … sometimes via schema-on-need.
- DBMS vendors’ work to exploit multiple kinds of storage in one system, from Microsoft to MemSQL.
- WibiData and Kiji.
BDAS and Spark make a splendid example as well.
As to the third point:
- In ever more use cases, the essential simplicity of the relational data model is fundamentally obsolete.
- It’s been generally accepted for a couple of years that analytic data topologies often need to be complex.
- Predictive models are getting markedly more complex as well.
Bottom line: Serious software has been built for over 50 years. Very little of it is simple any more.
Hi, this is Eric Maurice.
Oracle just released Security Alert CVE-2014-0160 to address the publicly disclosed ‘Heartbleed’ vulnerability which affects a number of versions of the OpenSSL library. Due to the severity of this vulnerability, and the fact that active exploitation of this vulnerability is reported “in the wild,” Oracle recommends that customers of affected Oracle products apply the necessary patches as soon as they are released by Oracle.
The CVSS Base Score for this vulnerability is 5.0. This relatively low score illustrates the difficulty of devising a system that can rate the severity of all types of vulnerabilities, including those that constitute blended threats.
Vulnerability CVE-2014-0160 is easy to exploit with relative impunity, as it is remotely exploitable without authentication over the Internet. However, a successful exploit can only result in compromising the confidentiality of some of the data contained in the targeted system. Active exploitation of the bug allows a malicious perpetrator to read the memory of the targeted system on which the vulnerable versions of the OpenSSL library reside. The vulnerability, on its own, does not allow a compromise of the availability (e.g., a denial of service attack) or integrity of the targeted system (e.g., deletion of sensitive log files).
Unfortunately, this vulnerability is very serious in that it is contained in a widely used security package, which enables the use of SSL/TLS, and the compromise of that memory can have serious follow-on consequences. According to http://heartbleed.com, the compromised data may contain passwords, private keys, and other sensitive information. In some instances, this information could be used by a malicious perpetrator to decrypt private information that was sent months or years ago, or to log into systems with a stolen identity. As a result, this vulnerability creates very significant risks, including unauthorized access to systems with full user rights.
For more information:
The Advisory for Security Alert CVE-2014-0160 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2014-0160-2190703.html
The ‘OpenSSL Security Bug - Heartbleed / CVE-2014-0160’ page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html
The ‘Heartbleed’ web site is located at http://www.heartbleed.com. Note that this site is not affiliated with Oracle.
I'm using the sample application from my previous post about dynamic ADF BC and the new dynamic ADF UI component in ADF 12c - ADF Dynamic ADF BC - Loading Multiple Instances (Nr. 100 in 2013). The same technique as described below can also be applied to design-time ADF BC, across different ADF versions. Download the sample application, updated for this post - ADFDynamicReportUI_v6.zip.
The default search dialog is case sensitive, as you can test quite easily. Try to search for a lower-case value when you know there are matching upper-case values - there will be no results:
The SQL query is generated with a WHERE clause, as it should be, and it tries to search for matching values - but there are no such records in the DB:
We can solve this with a generic class - our custom View Object implementation class. My sample uses dynamic ADF BC, so I register this custom class programmatically with the VO instance (typically you would do it through the wizard for design-time ADF BC):
As I mentioned above, the sample application uses the dynamic ADF UI component; however, the same works with regular ADF UI as well - it doesn't matter:
Here is the main trick for making search from the LOV dialog case insensitive. You must override the buildViewCriteriaClauses method in the View Object implementation class. If the current VO instance represents an LOV (if you don't want to rely on a naming convention, you could create an additional View Object implementation class intended for use only with LOVs), we should invoke the setUpperColumns method on the View Criteria. This converts the entire View Criteria clause to use UPPER for both the criteria item and the bind variable:
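A minimal sketch of such a View Object implementation class (the LOV-detection naming convention and the class name are assumptions for illustration; verify the exact oracle.jbo.server API signatures in your ADF version):

```java
import oracle.jbo.ViewCriteria;
import oracle.jbo.server.ViewObjectImpl;

// Generic VO implementation class: any VO instance whose name suggests
// it backs an LOV gets UPPER() applied to both criteria items and bind
// variables before the View Criteria clause is built.
public class CustomViewObjectImpl extends ViewObjectImpl {

    @Override
    public String buildViewCriteriaClauses(ViewCriteria vc) {
        // The "LOV" naming convention is an assumption; a dedicated
        // implementation class registered only on LOV VOs avoids it
        if (getName() != null && getName().contains("LOV")) {
            vc.setUpperColumns(true);
        }
        return super.buildViewCriteriaClauses(vc);
    }
}
```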
Now, with automatic conversion to upper case, try to execute the same search - you should see results:
The View Criteria clause is constructed with UPPER, which is why it allows a case-insensitive search. Of course, for runtime DB performance optimisation, you need to make sure there is a function-based index created for the searchable columns:
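For example (the table and column names here are assumptions, borrowed from the usual HR-style sample schema):

```sql
-- Function-based index so that UPPER(job_id) predicates can use an index
CREATE INDEX emp_job_upper_idx ON employees (UPPER(job_id));
```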
The same works for any number of View Criteria items. Here I search using both attributes:
The View Criteria clause contains both attributes, and both are changed to use UPPER - this is perfect:
Case-insensitive auto completion works as well with the technique described above. Try to type a value that exists in the LOV - but in lower case (it_prog):
Such a value is located and is automatically changed to use the case as it is originally stored in the DB (IT_PROG):
The View Criteria clause was constructed with UPPER in the case of auto completion, as well as with the regular search in the LOV dialog:
Interview: Which Type of Virtualization Should I Use? - I routinely ask techies which type of virtualization they'd recommend for which type of job. I seldom get an answer as crystal clear as Brian Bream's.

Database Community
Hot: Check out the Oracle Critical Patch Update for April 15, 2014 - Over a hundred patches for Oracle products and technologies...including Oracle Database 12c. Get it here: http://ora.cl/6G6
Got Big Data? Here's a new collection of Technology Deep Dives on the OTN Database YouTube channel, organized in a handy playlist - subscribe today.
Oracle Support publishes the Oracle Enterprise Manager Bundle Patch Master Note. Updates apply to:
Enterprise Manager for Cloud
Enterprise Manager Base Platform - Version 184.108.40.206.0 and later
Enterprise Manager for Fusion Applications - Version 220.127.116.11.0 and later
Enterprise Manager for Oracle Database - Version 18.104.22.168.0 and later
Enterprise Manager for Fusion Middleware - Version 22.214.171.124.0 and later
Information in this document applies to any platform. Get it here: http://ora.cl/1f8
Friday Funny from OTN Database Community Manager, Laura Ramsey - Famous Oracle ACE Selfie :) Taken at Collaborate 2014.
Video: Board Buffet - IoT, Java and Raspberry Pi - Java expert Vinicius Senger and Oracle engineer Gary Collins discuss IoT and show a bunch of different types of boards, single board computers, and plug computers.
Free Java Virtual Developer Day - Next month, Oracle will host a Virtual Developer Day covering Java SE 8, Java EE 7 and Java Embedded. The VDD is free to attend, just make sure to register. The complete agenda and the registration details can be found here.
A classic video: How To Design A Good API and Why it Matters by Josh Bloch
Friday Funny - with apologies to experts everywhere
Thought Leadership Webcast Series
Across the Enterprise
Five Steps for Mobilizing Digital Experiences
How Do You Deliver High-Value Moments of Engagement?
Web and mobile have become primary channels for engaging with customers today. To compete effectively, companies need to deliver multiple digital experiences that are contextually relevant to customers and valuable for the business—across various channels and on a global scale. But doing so is a great challenge without the right strategies and architectures in place.
As the kickoff of the new Digital Business Thought Leadership Series, noted industry analyst Geoffrey Bock investigated what some of Oracle’s customers are already doing, and how they are rapidly mobilizing the capabilities of their enterprise ecosystems.
Join us for a conversation about building your digital roadmap for the engaging enterprise. In this webcast you’ll have an opportunity to learn:
- How leading organizations are extending and mobilizing digital experiences for their customers, partners, and employees
- The key best practices for powering the high-value moments of engagement that deliver business value
- Business opportunities and challenges that exist for enterprise wide mobility to fuel multichannel experiences
Principal, Bock & Company
Senior Product Marketing Director, Oracle WebCenter
Copyright © 2014, Oracle and/or its affiliates.
Everyone “knows” that bitmap indexes are a disaster (compared to B-tree indexes) when it comes to DML. But at an event I spoke at recently someone made the point that they had observed that their data loading operations were faster when the table being loaded had bitmap indexes on it than when it had the equivalent B-tree indexes in place.
There’s a good reason why this can be the case. No prizes for working out what it is – and I’ll supply an answer in a couple of days’ time. (Hint – it may also be the reason why Oracle doesn’t use bitmap indexes to avoid the “foreign key locking” problem).
The presentation catalog (Web Catalog) stores the content that users create within OBIEE. While the Catalog uses the presentation layer objects, do not confuse the presentation layer within the RPD with the presentation catalog. The presentation catalog includes objects such as folders, shortcuts, filters, KPIs and dashboards. These objects are built using the presentation layer within the RPD.
The difference between RPD and Catalog security is that repository-level restrictions give the most flexibility, as they can be either coarse-grained or fine-grained based on the data. Catalog-level restrictions are more coarse-grained, as they are applied to entire subject areas and/or objects.
To access an object in the catalog, users must have the appropriate permissions, and can use either the BI client or the web user interface. The BI client for the Web Catalog is installed along with the BI Admin client.

Access Control Lists (ACL)
Access Control Lists (ACLs) are defined for each object in the catalog. Within the file system, the ACLs are stored in the *.ATR files, which may be viewed with a hex editor. A 16-digit binary representation is used, similar to UNIX permissions (e.g. 777). There are six different types of permissions for each object:
- Full control
- No Access
In 11g the catalog is located here:
Catalog Permission Reports
From a security perspective, the permission reports that can be generated from the Web Catalog client tool are very valuable and can be exported to Excel for further analysis. For example, these reports can show who can bypass OBIEE security and issue Direct SQL, or who has rights to Write-Back to the database. The security ACL will report on who has such Administration privileges.
OBIEE Administration Privileges
BI Catalog Client
If you have questions, please contact us at firstname.lastname@example.org
-Michael Miller, CISSP-ISSMP

References
- OBIEE Security Examined - Webinar and Presentation: OBIEE Security Examined Webinar
- OBIEE Security Examined - Whitepaper: OBIEE Security Examined
I think of myself as a developer, but my current role is in a small team running a small system. And by running, I mean that we are
- 'root' and 'Administrator' on our Linux and Windows servers
- 'oracle / sysdba' on the database side,
- the apex administrator account and the apex workspace administrators,
- the developers and testers,
- the people who set up (and revoke) application users and
- the people on the receiving end of the support email
The advantage of having all those hats, or at least all those passwords, is that when I'm looking at issues, I get to look pretty much EVERYWHERE.
I look at the SSH, FTP and mailserver logs owned by root. The SSH logs generally tell me who logged on where and from where. Some of that is for file transfers (some are SFTP, some are still FTP), some of it is the other members of the team logging on to run jobs. The system sends out lots of mail notifications, and occasionally they don't arrive so I check that log to see that it was sent (and if it may have been too big, or rejected by the gateway).
Also on the server are the Apache logs. We've got these on daily rotate going back a couple of years, because it is a small enough system that the log sizes don't matter. But Apex stuffs most of those field values into the URL as a GET, so they all get logged by Apache. I can get a good idea of what IP address was inquiring about a particular location or order by grepping the logs for the period in question.
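For example, a rough sketch of the kind of grep I mean (the log format, the Apex URL layout and the p_order_id item name are assumptions for illustration):

```shell
# Sample access.log lines (combined log format, abbreviated)
printf '%s\n' \
  '10.0.0.1 - - [19/Apr/2014:10:00:01 +0000] "GET /apex/f?p=100:1:0::NO::p_order_id:1042 HTTP/1.1" 200 1234' \
  '10.0.0.2 - - [19/Apr/2014:10:00:05 +0000] "GET /apex/f?p=100:2 HTTP/1.1" 200 99' \
  '10.0.0.1 - - [19/Apr/2014:10:00:09 +0000] "GET /apex/f?p=100:1:0::NO::p_order_id:1042 HTTP/1.1" 200 1234' \
  > access.log

# Which client IPs asked about order 1042, and how often?
grep 'p_order_id:1042' access.log | awk '{print $1}' | sort | uniq -c | sort -rn
```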
I haven't often had the need to look in the Oracle alert logs or dump directories, but they are there if I want to run a trace on some code.
In contrast, I'm often looking at the V$ (and DBA_) views and tables. The database has some audit trail settings, so we can track DDL and (some) logons. Most of the database access is via the Apex component, so there's only a connection pool there.
The SELECT ANY TABLE privilege also gives us access to the underlying Apex tables that tell us the 'private' session state of variables, collections etc. (Scott Wesley blogged on this a while back). Oh, and it's amazing how many people DON'T log out of an application, but just shut their browser (or computer) down. At least it amazed me.
The Apex workspace logs stick around for a couple of weeks too, so they can be handy to see who was looking at which pages (because sometimes people email us a screenshot of an error message without telling us how or where it popped up). Luckily, error messages are logged in that workspace log.
We have internal application logs too. Emails sent, batch jobs run, people logging on, navigation menu items clicked. And some of our tables include columns with a DEFAULT from SYS_CONTEXT/USERENV (Module, Action, Client Identifier/Info) so we can automatically pick up details when a row is inserted.
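A sketch of such a table definition (the table and column names are hypothetical; the USERENV parameters are the standard ones mentioned above):

```sql
-- Capture session metadata automatically on insert via USERENV defaults
CREATE TABLE audit_demo (
  id                NUMBER,
  created_by        VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','SESSION_USER'),
  created_module    VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','MODULE'),
  created_action    VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','ACTION'),
  client_identifier VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')
);
```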
All this metadata makes it a lot easier to find the cause of problems. It isn't voyeurism or spying. Honest.
A minimal Oracle Linux install contains a really small set of RPMs, but typically not enough for a product to install on, while a full/complete install contains way more packages than you need. While a full install is convenient, it also means that the likelihood of having to install an errata package is higher, and as such the cost of patching and updating/maintaining systems increases.
In an effort to make it as easy as possible, we have created a number of pre-install RPM packages which don't really contain actual programs; they're more or less dummy packages plus a few configuration scripts. They are built around the concept that you have a minimal OL installation (configured to point to a yum repository), and all the RPMs/packages which the specific Oracle product requires to install cleanly and pass the prerequisites are dependencies of the pre-install RPM.
When you install the pre-install RPM, yum will calculate the dependencies, figure out which additional RPMs are needed beyond what's installed, download them and install them. The configuration scripts in the RPM will also set up a number of sysctl options, create the default user, etc. After installation of this pre-install RPM, you can confidently start the Oracle product installer.
We have released pre-install RPMs in the past for the Oracle Database (11g, 12c, ...) and the Oracle Enterprise Manager 12c agent. We have now also released a similar RPM for E-Business Suite R12.
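Usage is then a single yum call; for instance, for the database preinstall package (verify the exact package name in your yum channel):

```shell
# Resolves and installs all dependency RPMs, then runs the bundled
# configuration scripts (sysctl settings, default oracle user, etc.)
yum install -y oracle-rdbms-server-12cR1-preinstall
```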
RDBMS and Performance
Frequently Misused Metrics in Oracle, from The Oracle Alchemist.
Some notes from Tyler Muth on Oracle Database 10.2 De-Supported.
Also from Tyler Muth, a quick posting on Create Bigfile Tablespace – Oracle Managed Files (OMF).
Recovering a standby over the network in 12c, from Martin's Blog.
Data Pump Enhancements in Oracle Database 12c, from the ORACLE-BASE Blog.
From the dbi services blog: Implementing Oracle Database as a Service (DBAAS).
MONITORING ORACLE GOLDEN GATE FROM SQL DEVELOPER, from the DBASOLVED blog.
How to restrict data coming back from a SOAP Call, from Angelo Santagata's Blog.
Oracle Internet Expenses
Oracle Fusion Expenses - Mobile app for R12, from the Oracle Internet Expenses blog.
APEX 5 first peek - Themes & Templates, from grassroots oracle.
A new video introducing EPM Mobile is available on YouTube. You can find this video, and other EPM videos, here: http://www.youtube.com/user/OracleEPMWebcasts
CVE-2013-5211 Input Validation vulnerability in NTP, from the Third Party Vulnerability Resolution Blog.
From the ever-useful LifeHacker: This Tipping Infographic Shows Who Expects Tips, and How Much.
A Guest Post by Heike Lorenz, Director of Global Product Marketing, Policy Automation
Making complex decisions EASY by automating your service policies allows your organization to efficiently ensure the correct decisions are being applied to the right people.
Like the hit British TV series Little Britain suggests, when “Computer Says No”, you can be left wondering why.
It’s not easy to automate your Customer Service policies, let alone do it in a way that is transparent, consistent and cost effective - especially if you are working within environments where market conditions and regulations change frequently. Get it wrong and you are left with compliance problems and customer complaints—and that’s a costly outcome!
So while you may not be striving to change the decision from a “NO” to a “YES” for your customer, you should be looking to get to that answer quicker for them, with a complete explanation as to why it’s a “NO”, have the traceability of what answer was given at that time, have the peace of mind that the answer is accurate, AND do it all at the lowest cost to your business. Simple right?!
So how do you achieve this? There are three core areas of consideration: 1) Centralize & Automate, 2) Personalize & Optimize, and 3) Analyze & Adapt.
1) Centralize & Automate
One method is to grab all of your policy documents, throw them at a team of costly developers to move into a database, code the logic around them, and hope what comes out is maintainable, accurate and accessible to the right audiences. Or, maybe not.
A simpler method is to take your original policy documents and import them into a policy automation tool that will help a business user through a step-by-step process to model the rules. Once developed, they can be tested, modified, published and updated within a few clicks. The result is a solution that can empower your agents with dynamic interviewing tools, and your customers with a self-service approach, across channels, in any language, and on any device.
But that is only part of the whole picture.
2) Personalize & Optimize
A simple decision process could be easily managed by one or two published FAQs, whereas a complex decision process requires you to take into account many personal attributes about that specific customer—and by definition those attributes can’t be applied through static views. Getting a customer to repeat information, or worse not even taking into consideration critical information that is provided within the interaction and personalizes the response, is a fast way to get them to abandon the process, or worse leave you!
You must ensure that your automated policies can be optimized to dynamically adapt to every customer’s unique situation and attributes—be that channel, device, location, language, or other more personal characteristics that are shared prior and during the interaction. After all, each answer should be uniquely theirs, explaining in detail why the decision was reached, with everything taken into consideration.
3) Analyze & Adapt
The saying “data rich and insight poor” is one that often fits with the word “compliance”—businesses can easily be more focused on capturing volumes of data for compliance, and less on making the data actionable. The flip side of that is “data poor” when businesses must scramble to get the data needed to ensure compliance, as an afterthought! And we all know that having insight without ability for timely action is a missed opportunity to improve, avoid, or sustain compliance.
As your policies change, or you introduce new policies, often the requirements to capture data can change too. Adapting to environmental or organizational changes requires you to gather the right data to deliver the right insight for action. The right tools are required in order to apply that insight in a timely, measurable, and effective manner. The right volume of accessible data is also needed to remain compliant with regulatory business or industry Customer Service standards during periodic audits. So you must have a solution that can adapt with scale, demand, change, and archive—a solution that can actually automate your service policies for insight, compliance, and agility—making it easy.
Putting all these pieces together lets you truly automate the nurturing of trusted relationships with your customers during complex decision-making processes, through transparent and personalized interactions. Giving your business confidence that in even the most demanding markets, you are remaining compliant, in a cost-effective and efficient way.
The Oracle Service Cloud empowers your business to care, take action and succeed in your Web Customer Service initiatives and become a Modern Customer Service organization.
In the next release of the PeopleSoft Interaction Hub (9.1/FP3), we will be deprecating direct Lotus Notes support as an email option. Customers that wish to use Lotus Notes in the future can still use our IMAP support.
If you haven’t talked to me IRL for the past 10 months, then I haven’t pestered you about the wonders of BLE and micro-location. My love affair with BLE (Bluetooth Low Energy) beacons became clear when I heard at WWDC 2013 that Apple was implementing BLE beacon detection in their CoreLocation framework. Apple showed how a small BLE beacon sending a constant signal (UUID + Major + Minor *) at a given interval could help for what is now known as micro-location.
At the time, I just happened to be experimenting with wifi and bluetooth RSSI to accomplish similar results. I was prototyping a device that sniffed MAC addresses from surrounding devices and triggered certain interactions based on our enterprise software (CRM, HCM, etc.). You can find more on this topic in the white paper “How the Internet of Things Will Change the User Experience Status Quo” (sorry, but it's not free) that I presented last year at the FiCloud conference.
The BLE beacon or iBeacon proved to be a better solution after all, given its user opt-in nature and low power consumption capabilities. Since then I have been prototyping different mobile apps using this technology. The latest of these is a Google Glass + iBeacon ( github link: GlassBeacon) example. I’m claiming to be the first to do this implementation since the ability to integrate BLE on Glass just became available on April 15 2014 :).
Stay tuned for more BLE beacon goodness. We will be showing more enterprise related use cases with this technology in the future.
*UUID: a unique id to distinguish your beacons. Major: used to group related sets of beacons. Minor: used to identify a beacon within a group

Possibly Related Posts:
- 2014 AT&T Developer Summit Hackathon
- Google Glass Details Emerge
- First 3 Days as a Glass Explorer (Prologue)
- Rapid Prototyping Tools
- Messing around with Glass and Fusion CRM for Kscope 13
Replication of data is always a fun thing to look at; what is replicating?! Discussions around how to get data from server/database A to server/database B, or even to server/database C, are valid questions and are often asked by management. Often the simple (knee-jerk) answer is: just set it up and start replicating. Although Oracle GoldenGate may be simple (for some architectures) to set up to meet the demands of management and the task at hand, problems will arise with the data being replicated.
When problems arise, the need to identify and resolve the replication issue becomes a critical and time-consuming task. Oracle GoldenGate provides a few utilities to help in diagnosing and resolving replication issues. One such utility is the LogDump utility. The LogDump utility is used to read the local and remote trail files that support the continuous extraction and replication of transaction changes within the database.
Knowing what trail files are used for is part of the battle when troubleshooting replication issues with Oracle GoldenGate. How do we use LogDump to read these trail files? What are we looking for in a trail file to understand what is going on? To answer these questions, we need to start the LogDump utility.
To start LogDump, we just need to be in the OGG_HOME and run the LogDump command. The below code set shows you how to run LogDump.
[oracle@oel oggcore_1]$ pwd
/oracle/app/product/12.1.2/oggcore_1
[oracle@oel oggcore_1]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 126.96.36.199.0 17185003 17451407

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

Logdump 22 >
Note: Your LogDump session should start at 1, not 22 (Logdump 22); LogDump remembers session info until you log out of the server.
Once LogDump has been started, we need to open a trail file and setup how we want the information to be displayed. Commands for LogDump can be displayed by using the “help” command. In the following code block, we see that we are opening a local trail (lt) file and setting a few environment options.
Note: Trail files (local and remote) are normally prefixed with two (2) letters followed by a six (6) digit string. In new environments trail files will start with (prefix)000000 (lt000000 or rt000000).
Logdump 15 >open ./dirdat/lt000000
Current LogTrail is /oracle/app/product/12.1.2/oggcore_1/dirdat/lt000000
Logdump 16 >ghdr on
Logdump 17 >detail on
Logdump 18 >detail data
Logdump 19 >usertoken on
Logdump 20 >
The “help” command inside of LogDump provides more options. The options that we are using in this example are:
- ghdr on = toggle header display on | off
- detail on = toggle detailed data display (on | off | data)
- detail data = toggle detailed data display (on | off | data) (repeated this just to make sure)
- usertoken on = show user token information (on | off | detail)
With the LogDump environment set, we can now use the “next (n)” command to see the information in the trail file.
Logdump 20 > n
Once the header output is displayed, we need to understand how to read this information. Image 1 provides a quick explanation of each major component within a trail file transaction. We can see the following items for a transaction in the trail file (lt000000):
- Header Area: Transaction information
- Date/Time and type of transaction
- Object associated with the transaction
- Image of transaction (before/after)
- Columns associated with the transaction
- Transaction data formatted in Hex
- Length of the record
- ASCII format of the data
- Record position within the trail file (RBA)
At this point, we may be asking: why is this important? Understanding the trail files and how to find information within them is an important part of troubleshooting an Oracle GoldenGate environment.
Example: a replicat abends and we need to restart it from a given RBA. Being able to identify the first, next and last RBA in the trail file is helpful in understanding why the abend happened and in identifying a starting point for restarting successfully.
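For instance, a sketch of repositioning in GGSCI (the replicat name, trail sequence number and RBA below are assumptions for illustration):

```
GGSCI> ALTER REPLICAT rfin001, EXTSEQNO 0, EXTRBA 2997
GGSCI> START REPLICAT rfin001
```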
In the end, the Oracle GoldenGate environment can be simple yet complex at the same time. Understanding the different components of the environment is very useful and worth the time involved to learn it.
Filed under: Golden Gate
The Oracle Applications User Experience team is delighted to announce that our Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook is available for free.
The Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook
We’re sharing the same user experience design patterns that Oracle uses to build simplified user interfaces (UIs) for the Oracle Sales Cloud and Oracle Human Capital Management (HCM) Cloud, along with their supporting guidance on page types and Oracle ADF components, so that you can build your own simplified UI solutions.
Design patterns offer big wins for applications builders because they are proven, reusable, and based on Oracle technology. They enable developers, partners, and customers to design and build the best user experiences consistently, shortening the application's development cycle, boosting designer and developer productivity, and lowering the overall time and cost of building a great user experience.
Now, Oracle partners, customers and the Oracle ADF community can share further in the Oracle Applications User Experience science and design expertise that brought the acclaimed simplified UIs to the Cloud and they can build their own UIs, simply and productively too!
Hi, this is Eric Maurice.
In addition to the release of the April 2014 Critical Patch Update, Oracle has also addressed the recently publicly disclosed issues in the Oracle Java Cloud Service. Note that the combination of this announcement with the release of the April 2014 Critical Patch Update is not coincidental or the result of the unfortunate public disclosure of exploit code, but rather the result of the need to coordinate the release of related fixes for our on-premise customers.
Shortly after issues were reported in the Oracle Java Cloud Service, Oracle determined that some of these issues were the result of certain security issues in Oracle products (though not Java SE), which are also licensed for traditional on-premise use. As a result, Oracle addressed these issues in the Oracle Java Cloud Service, and scheduled the inclusion of related fixes in the following Critical Patch Updates upon completion of successful testing so as to avoid introducing regression issues in these products.
For more information:
The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html
More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html
Log Buffer is globe trotting this week from end to end. From every nook, it has brought you some sparkling gems of blog posts. Enjoy!!!
On April 16th, Oracle announced the Oracle Virtual Compute Appliance X4-2.
Do your Cross Currency Receipts fail Create Accounting?
Oracle Solaris 11.2 Launch in NY.
WebCenter Portal 11gR1 dot8 Bundle Patch 3 (188.8.131.52.3) Released.
What do Sigma, a Leadership class and a webcast have in common?
Stairway to SQL Server Agent – Level 9: Understanding Jobs and Security.
SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.
SQL Server 2014 In-Memory OLTP Dynamic Management Views.
Why every SQL Server installation should be a cluster.
SQL Server Backup Crib Sheet.
Looking for Slave Consistency: Say Yes to --read-only and No to SUPER and --slave-skip-errors.
More details on disk IO-bound, update only for MongoDB, TokuMX and InnoDB.
Making the MTR rpl suite GTID_MODE Agnostic.
‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful.
MongoDB, TokuMX and InnoDB for disk IO-bound, update-only by PK.
Deploying JAXWS to JCS?? Getting "java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl" error
- Deploying JAX-WS or a Spring app to Oracle Java Cloud XX.XX and getting a "java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl"? But the application works perfectly fine on a local WebLogic Server??
- The issue
- It's a bug in the Java Cloud Service (bug#18241690); basically JCS is picking up the wrong XSL transformer
- In your code, simply put the following piece of Java code to execute when your application starts up
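A minimal sketch of this workaround (the class and method names here are assumptions for illustration; the factory class name is the JDK-internal default and should be verified against your runtime):

```java
// Force the JDK's built-in XSLT implementation instead of the
// org.apache.xalan class that JCS fails to find. Call apply() once at
// application startup, e.g. from a ServletContextListener.
public class XalanWorkaround {

    public static void apply() {
        System.setProperty("javax.xml.transform.TransformerFactory",
            "com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl");
    }

    public static void main(String[] args) {
        apply();
        // Show which factory implementation will now be requested
        System.out.println(
            System.getProperty("javax.xml.transform.TransformerFactory"));
    }
}
```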
And all should be fine :-)