Hi, this is Eric Maurice.
Oracle just released Security Alert CVE-2014-0160 to address the publicly disclosed ‘Heartbleed’ vulnerability which affects a number of versions of the OpenSSL library. Due to the severity of this vulnerability, and the fact that active exploitation of this vulnerability is reported “in the wild,” Oracle recommends that customers of affected Oracle products apply the necessary patches as soon as they are released by Oracle.
The CVSS Base Score for this vulnerability is 5.0. This relatively low score illustrates the difficulty of designing a single system that can rate the severity of all types of vulnerabilities, including those that constitute a blended threat.
Vulnerability CVE-2014-0160 is easy to exploit with relative impunity, as it is remotely exploitable without authentication over the Internet. However, a successful exploit can only compromise the confidentiality of some of the data contained on the targeted system. Active exploitation of the bug allows a malicious perpetrator to read the memory of the targeted system on which the vulnerable versions of the OpenSSL library reside. The vulnerability, on its own, does not allow a compromise of the availability (e.g., a denial of service attack) or integrity of the targeted system (e.g., deletion of sensitive log files).
Unfortunately, this vulnerability is very serious in that it is contained in a widely used security package, which enables the use of SSL/TLS, and the compromise of that memory can have serious follow-on consequences. According to http://heartbleed.com, the compromised data may contain passwords, private keys, and other sensitive information. In some instances, this information could be used by a malicious perpetrator to decrypt private information that was sent months or years ago, or to log into systems with a stolen identity. As a result, this vulnerability creates very significant risks, including unauthorized access to systems with full user rights.
For more information:
The Advisory for Security Alert CVE-2014-0160 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2014-0160-2190703.html
The ‘OpenSSL Security Bug - Heartbleed / CVE-2014-0160’ page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html
The ‘Heartbleed’ web site is located at http://www.heartbleed.com. Note that this site is not affiliated with Oracle.
Thought Leadership Webcast Series
Across the Enterprise
Five Steps for Mobilizing Digital Experiences
How Do You Deliver High-Value Moments of Engagement?
Web and mobile have become primary channels for engaging with customers today. To compete effectively, companies need to deliver multiple digital experiences that are contextually relevant to customers and valuable for the business—across various channels and on a global scale. But doing so is a great challenge without the right strategies and architectures in place.
As the kickoff of the new Digital Business Thought Leadership Series, noted industry analyst Geoffrey Bock investigated what some of Oracle’s customers are already doing, and how they are rapidly mobilizing the capabilities of their enterprise ecosystems.
Join us for a conversation about building your digital roadmap for the engaging enterprise. In this webcast you’ll have an opportunity to learn:
- How leading organizations are extending and mobilizing digital experiences for their customers, partners, and employees
- The key best practices for powering the high-value moments of engagement that deliver business value
- Business opportunities and challenges that exist for enterprise wide mobility to fuel multichannel experiences
Principal, Bock & Company
Senior Product Marketing Director, Oracle WebCenter
Copyright © 2014, Oracle and/or its affiliates.
Everyone “knows” that bitmap indexes are a disaster (compared to B-tree indexes) when it comes to DML. But at an event I spoke at recently someone made the point that they had observed that their data loading operations were faster when the table being loaded had bitmap indexes on it than when it had the equivalent B-tree indexes in place.
There’s a good reason why this can be the case. No prizes for working out what it is – and I’ll supply an answer in a couple of days’ time. (Hint – it may also be the reason why Oracle doesn’t use bitmap indexes to avoid the “foreign key locking” problem.)
The presentation catalog (Web Catalog) stores the content that users create within OBIEE. While the Catalog uses the presentation layer objects, do not confuse the presentation layer within the RPD with the presentation catalog. The presentation catalog includes objects such as folders, shortcuts, filters, KPIs and dashboards. These objects are built using the presentation layer within the RPD.
The difference between RPD and Catalog security is that repository-level restrictions give the most flexibility, as they can be either coarse-grained or fine-grained based on the data. Catalog-level restrictions are more coarse-grained, as they are applied to entire subject areas and/or objects.
To access an object in the catalog, users must have the appropriate permissions, and can use either the BI client or the web user interface. The BI client for the Web Catalog is installed along with the BI Admin client.

Access Control Lists (ACL)
Access Control Lists (ACL) are defined for each object in the catalog. Within the file system, the ACLs are stored in the *.ATR files, which may be viewed with a hex editor. A 16-digit binary representation is used, similar to UNIX permissions (e.g., 777). There are six different types of permissions for each object:
- Full control
- No Access
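To make the ACL idea concrete, here is a minimal sketch of decoding a 16-bit permission mask the way a UNIX mode word is decoded. The bit positions and permission names below are assumptions for illustration only, not the documented *.ATR layout:

```python
# Illustrative sketch: bit positions are assumptions, not the real .ATR layout.
# Each catalog object stores a permission mask per account; decoding it is
# just checking which bits are set, much like a UNIX mode word.

PERMISSION_BITS = {  # assumed positions, for illustration only
    0: "read",
    1: "traverse",
    2: "write",
    3: "delete",
    4: "change permissions",
    5: "set ownership",
}

def decode_acl(mask: int) -> list:
    """Return the permission names whose bits are set in a 16-bit mask."""
    return [name for bit, name in sorted(PERMISSION_BITS.items()) if mask & (1 << bit)]

# "Full Control" would be all bits set; "No Access" is an all-zero mask.
full_control = sum(1 << b for b in PERMISSION_BITS)
print(decode_acl(0b11))          # ['read', 'traverse']
print(len(decode_acl(full_control)))  # 6
```
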
In 11g the catalog is located here:
Catalog Permission Reports
From a security perspective, the permission reports that can be generated from the Web Catalog client tool are very valuable and can be exported to Excel for further analysis. For example, these reports can show who can bypass OBIEE security and issue Direct SQL, or who has rights to Write-Back to the database. The security ACL will report on who has such Administration privileges.
OBIEE Administration Privileges
BI Catalog Client
If you have questions, please contact us at email@example.com
-Michael Miller, CISSP-ISSMP

References
- OBIEE Security Examined - Webinar and Presentation: OBIEE Security Examined Webinar
- OBIEE Security Examined - Whitepaper: OBIEE Security Examined
I think of myself as a developer, but my current role is in a small team running a small system. And by running, I mean that we are
- 'root' and 'Administrator' on our Linux and Windows servers
- 'oracle / sysdba' on the database side,
- the apex administrator account and the apex workspace administrators,
- the developers and testers,
- the people who set up (and revoke) application users and
- the people on the receiving end of the support email
The advantage of having all those hats, or at least all those passwords, is that when I'm looking at issues, I get to look pretty much EVERYWHERE.
I look at the SSH, FTP and mailserver logs owned by root. The SSH logs generally tell me who logged on where and from where. Some of that is for file transfers (some are SFTP, some are still FTP), some of it is the other members of the team logging on to run jobs. The system sends out lots of mail notifications, and occasionally they don't arrive so I check that log to see that it was sent (and if it may have been too big, or rejected by the gateway).
Also on the server are the Apache logs. We've got these on daily rotate going back a couple of years, because it is a small enough system that the log sizes don't matter. But Apex stuffs most of those field values into the URL as a GET, so they all get logged by Apache. I can get a good idea of what IP address was inquiring about a particular location or order by grepping the logs for the period in question.
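As a rough illustration of that grep workflow (the log path, page/item names, and IP addresses below are invented for the example, not taken from the post), finding which IP asked about a particular order might look like:

```shell
# Illustrative only: log path, item names and IPs are invented.
# APEX puts item values in the f?p=... GET string, so Apache logs them.
LOG=/tmp/access_log.sample
cat > "$LOG" <<'EOF'
10.0.0.5 - - [14/Apr/2014:09:15:02 +0000] "GET /apex/f?p=100:12:::::P12_ORDER_ID:4711 HTTP/1.1" 200 5123
10.0.0.9 - - [14/Apr/2014:09:16:40 +0000] "GET /apex/f?p=100:1 HTTP/1.1" 200 2048
EOF

# Which IP address was asking about order 4711?
grep 'P12_ORDER_ID:4711' "$LOG" | awk '{print $1}'   # prints 10.0.0.5
```
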
I haven't often had the need to look in the Oracle alert logs or dump directories, but they are there if I want to run a trace on some code.
In contrast, I'm often looking at the V$ (and DBA_) views and tables. The database has some audit trail settings so we can track DDL and (some) logons. Most of the database access is via the Apex component, so there's only a connection pool there.
The SELECT ANY TABLE privilege also gives us access to the underlying Apex tables that reveal the 'private' session state of variables, collections etc. (Scott Wesley blogged on this a while back). Oh, and it's amazing how many people DON'T log out of an application, but just shut their browser (or computer) down. At least it amazed me.
The apex workspace logs stick around for a couple of weeks too, so they can be handy to see who was looking at which pages (because sometimes users email us a screenshot of an error message without telling us how or where it popped up). Luckily, error messages are logged in that workspace log.
We have internal application logs too. Emails sent, batch jobs run, people logging on, navigation menu items clicked. And some of our tables include columns with a DEFAULT from SYS_CONTEXT/USERENV (Module, Action, Client Identifier/Info) so we can automatically pick up details when a row is inserted.
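A minimal sketch of that DEFAULT trick follows; the table and column names are invented for illustration, while the SYS_CONTEXT('USERENV', ...) attributes are the ones the post mentions:

```sql
-- Illustrative only: table and column names are made up.
-- Columns default to session metadata captured via SYS_CONTEXT('USERENV', ...).
CREATE TABLE app_orders (
  order_id     NUMBER PRIMARY KEY,
  order_data   VARCHAR2(200),
  created_by   VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'),
  module_name  VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','MODULE'),
  action_name  VARCHAR2(64) DEFAULT SYS_CONTEXT('USERENV','ACTION')
);
```

With this in place, any insert that omits those columns automatically records who or what was running at the time.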
All this metadata makes it a lot easier to find the cause of problems. It isn't voyeurism or spying. Honest.
A minimal Oracle Linux install contains a really small set of RPMs, typically not enough for a product to install on, while a full/complete install contains way more packages than you need. While a full install is convenient, it also means that the likelihood of having to install errata for a package is higher, and as such the cost of patching and updating/maintaining systems increases.
In an effort to make it as easy as possible, we have created a number of pre-install RPM packages which don't really contain actual programs; they're more or less dummy packages with a few configuration scripts. They are built around the concept that you have a minimal OL installation (configured to point to a yum repository), and all the RPMs/packages which the specific Oracle product requires to install cleanly and pass the pre-requisites will be dependencies of the pre-install RPM.
When you install the pre-install RPM, yum will calculate the dependencies, figure out which additional RPMs are needed beyond what's installed, download them and install them. The configuration scripts in the RPM will also set up a number of sysctl options, create the default user, etc. After installation of this pre-install RPM, you can confidently start the Oracle product installer.
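For example, on Oracle Linux 6 the database pre-install flow can look like the sketch below. Package names vary by OL release and product version, so treat these as illustrative rather than definitive:

```shell
# Illustrative: exact package names depend on your OL release and product.
# Pull in the 12c database pre-install RPM and everything it depends on:
sudo yum install -y oracle-rdbms-server-12cR1-preinstall

# Inspect what the pre-install RPM requires and what its scripts configure:
yum deplist oracle-rdbms-server-12cR1-preinstall | head
rpm -q --scripts oracle-rdbms-server-12cR1-preinstall
```

After this completes, the kernel parameters, limits, and default user are in place and the product installer's prerequisite checks should pass.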
We have released pre-install RPMs in the past for the Oracle Database (11g, 12c, ...) and the Oracle Enterprise Manager 12c agent. Now we have also released a similar RPM for E-Business Suite R12.
RDBMS and Performance
Frequently Misused Metrics in Oracle, from The Oracle Alchemist.
Some notes from Tyler Muth on Oracle Database 10.2 De-Supported.
Also from Tyler Muth, a quick posting on Create Bigfile Tablespace – Oracle Managed Files (OMF).
Recovering a standby over the network in 12c, from Martin's Blog.
Data Pump Enhancements in Oracle Database 12c, from the ORACLE-BASE Blog.
From the dbi services blog: Implementing Oracle Database as a Service (DBAAS).
MONITORING ORACLE GOLDEN GATE FROM SQL DEVELOPER, from the DBASOLVED blog.
How to restrict data coming back from a SOAP Call, from Angelo Santagata's Blog.
Oracle Internet Expenses
Oracle Fusion Expenses - Mobile app for R12, from the Oracle Internet Expenses blog.
APEX 5 first peek - Themes & Templates, from grassroots oracle.
A new video introducing EPM Mobile is available on YouTube. You can find this video, and other EPM videos, here: http://www.youtube.com/user/OracleEPMWebcasts
CVE-2013-5211 Input Validation vulnerability in NTP, from the Third Party Vulnerability Resolution Blog.
From the ever-useful LifeHacker: This Tipping Infographic Shows Who Expects Tips, and How Much.
A Guest Post by Heike Lorenz, Director of Global Product Marketing, Policy Automation
Making complex decisions EASY by automating your service policies allows
your organization to efficiently ensure the correct decisions are being
applied to the right people.
As the hit British TV series Little Britain suggests, when “Computer Says No,” you can be left wondering why.
It’s not easy to automate your Customer Service policies, let alone do it in a way that is transparent, consistent and cost effective. Especially if you are working within environments where market conditions and regulations change frequently. Get it wrong and you are left with compliance problems and customer complaints—and that’s a costly outcome!
So while you may not be striving to change the decision from a “NO” to a “YES” for your customer, you should be looking to get to that answer quicker for them, with a complete explanation as to why it’s a “NO”, have the traceability of what answer was given at that time, have the peace of mind that the answer is accurate, AND do it all at the lowest cost to your business. Simple right?!
So how do you achieve this? There are three core areas of consideration: 1) Centralize & Automate, 2) Personalize & Optimize, and 3) Analyze & Adapt.
1) Centralize & Automate
One method is to grab all of your policy documents, throw them at a team of costly developers to move into a database, code the logic around them, and hope what comes out is maintainable, accurate and accessible to the right audiences. Or, maybe not.
A simpler method is to take your original policy documents and import them into a policy automation tool that will help a business user through a step-by-step process to model the rules. Once developed, they can be tested, modified, published and updated within a few clicks. The result is a solution that can empower your agents with dynamic interviewing tools, and your customers with a self-service approach, across channels, in any language, and on any device.
But that is only part of the whole picture.
2) Personalize & Optimize
A simple decision process could be easily managed by one or two published FAQs, whereas a complex decision process requires you to take into account many personal attributes of that specific customer—and by definition those attributes can’t be applied through static views. Getting a customer to repeat information, or worse, failing to take into account critical information provided within the interaction that would personalize the response, is a fast way to get them to abandon the process, or worse, leave you!
You must ensure that your automated policies can be optimized to dynamically adapt to every customer’s unique situation and attributes—be that channel, device, location, language, or other more personal characteristics that are shared prior and during the interaction. After all, each answer should be uniquely theirs, explaining in detail why the decision was reached, with everything taken into consideration.
3) Analyze & Adapt
The saying “data rich and insight poor” is one that often fits with the word “compliance”—businesses can easily be more focused on capturing volumes of data for compliance, and less on making the data actionable. The flip side of that is “data poor” when businesses must scramble to get the data needed to ensure compliance, as an afterthought! And we all know that having insight without ability for timely action is a missed opportunity to improve, avoid, or sustain compliance.
As your policies change, or you introduce new policies, often the requirements to capture data can change too. Adapting to environmental or organizational changes requires you to gather the right data to deliver the right insight for action. The right tools are required in order to apply that insight in a timely, measurable, and effective manner. The right volume of accessible data is also needed to remain compliant with regulatory business or industry Customer Service standards during periodic audits. So you must have a solution that can adapt with scale, demand, change, and archive—a solution that can actually automate your service policies for insight, compliance, and agility—making it easy.
Putting all these pieces together lets you truly automate the nurturing of trusted relationships with your customers during complex decision-making processes, through transparent and personalized interactions. Giving your business confidence that in even the most demanding markets, you are remaining compliant, in a cost-effective and efficient way.
The Oracle Service Cloud empowers your business to care, take action and succeed in your Web Customer Service initiatives and become a Modern Customer Service organization.
In the next release of the PeopleSoft Interaction Hub (9.1/FP3), we will be deprecating direct Lotus Notes support as an email option. Customers that wish to use Lotus Notes in the future can still use our IMAP support.
If you haven’t talked to me IRL in the past 10 months, then I haven’t pestered you about the wonders of BLE and micro-location. My love affair with BLE (Bluetooth Low Energy) beacons became clear when I heard at WWDC 2013 that Apple was implementing BLE beacon detection in their CoreLocation framework. Apple showed how a small BLE beacon sending a constant signal (UUID + Major + Minor *) at a given interval could help with what is now known as micro-location.
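As a rough sketch of that identity triple (the UUID and beacon layout below are invented for illustration; a real app would receive these values from CoreLocation region monitoring):

```python
# Illustrative sketch: values are invented; real apps get them from CoreLocation.
# A beacon advertises (UUID, major, minor): UUID identifies your deployment,
# major groups related beacons (e.g., a store), minor picks one out (a shelf).
from collections import namedtuple

Beacon = namedtuple("Beacon", "uuid major minor")

STORE_UUID = "f7826da6-4fa2-4e98-8024-bc5b71e0893e"  # example value only

beacons = [
    Beacon(STORE_UUID, major=1, minor=1),  # store 1, entrance
    Beacon(STORE_UUID, major=1, minor=2),  # store 1, checkout
    Beacon(STORE_UUID, major=2, minor=1),  # store 2, entrance
]

def in_region(b, uuid, major=None, minor=None):
    """Mimic a region match: None acts as a wildcard, like CoreLocation."""
    return b.uuid == uuid and major in (None, b.major) and minor in (None, b.minor)

# All beacons in "store 1", regardless of minor:
store1 = [b for b in beacons if in_region(b, STORE_UUID, major=1)]
print(len(store1))  # 2
```
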
At the time I just happened to be experimenting with WiFi and Bluetooth RSSI to accomplish similar results. I was prototyping a device that sniffed MAC addresses from surrounding devices and triggered certain interactions based on our enterprise software (CRM, HCM, etc.). You can find more on this topic in the white paper “How the Internet of Things Will Change the User Experience Status Quo” (sorry, but it’s not free) that I presented last year at the FiCloud conference.
The BLE beacon or iBeacon proved to be a better solution after all, given its user opt-in nature and low power consumption capabilities. Since then I have been prototyping different mobile apps using this technology. The latest of these is a Google Glass + iBeacon ( github link: GlassBeacon) example. I’m claiming to be the first to do this implementation since the ability to integrate BLE on Glass just became available on April 15 2014 :).
Stay tuned for more BLE beacon goodness. We will be showing more enterprise related use cases with this technology in the future.
*UUID: a unique id to distinguish your beacons. Major: used to group related sets of beacons. Minor: used to identify a beacon within a group.

Possibly Related Posts:
- 2014 AT&T Developer Summit Hackathon
- Google Glass Details Emerge
- First 3 Days as a Glass Explorer (Prologue)
- Rapid Prototyping Tools
- Messing around with Glass and Fusion CRM for Kscope 13
Replication of data is always a fun thing to look at. What is replicating? Discussions around how to get data from server/database A to server/database B, or even to server/database C, are valid questions and are often asked by management. Often the simple (knee-jerk) answer is: just set it up and start replicating. Although Oracle GoldenGate may be simple to set up (for some architectures) to meet the demands of management and the task at hand, problems will arise with the data being replicated.
When problems arise, the need to identify and resolve the replication issue becomes a critical and time-consuming task. Oracle GoldenGate provides a few utilities to help in diagnosing and resolving replication issues. One such utility is the LogDump utility, which is used to read the local and remote trail files that support the continuous extraction and replication of transaction changes within the database.
Knowing what trail files are used for is part of the battle when troubleshooting replication issues with Oracle GoldenGate. How do we use LogDump to read these trail files? What are we looking for, or at, in a trail file to understand what is going on? To answer these questions, we need to start the LogDump utility.
To start LogDump, we just need to be in the OGG_HOME and run the LogDump command. The below code set shows you how to run LogDump.
[oracle@oel oggcore_1]$ pwd
/oracle/app/product/12.1.2/oggcore_1
[oracle@oel oggcore_1]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.0.0 17185003 17451407

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

Logdump 22 >
Note: Your LogDump session should start at 1 not 22 (Logdump 22). LogDump remembers session info until you log out of the server.
Once LogDump has been started, we need to open a trail file and setup how we want the information to be displayed. Commands for LogDump can be displayed by using the “help” command. In the following code block, we see that we are opening a local trail (lt) file and setting a few environment options.
Note: Trail files (local and remote) are normally prefixed with two (2) letters followed by a six (6) digit string. In new environments, trail files will start with (prefix)000000 (lt000000 or rt000000).
Logdump 15 >open ./dirdat/lt000000
Current LogTrail is /oracle/app/product/12.1.2/oggcore_1/dirdat/lt000000
Logdump 16 >ghdr on
Logdump 17 >detail on
Logdump 18 >detail data
Logdump 19 >usertoken on
Logdump 20 >
The “help” command inside of LogDump provides more options. The options that we are using in this example are:
- ghdr on = toggle header display on | off
- detail on = toggle detailed data display (on | off | data)
- detail data = toggle detailed data display (on | off | data) (repeated this just to make sure)
- usertoken on = show user token information (on | off| detail)
With the LogDump environment set, we can now use the “next (n)” command to see the information in the trail file.
Logdump 20 > n
Once the header output is displayed, we need to understand how to read this information. Image 1 provides us with a quick explanation of each major component within a trail file transaction. We can see the following items for a transaction in the trail file (lt000000):
- Header Area: Transaction information
- Date/Time and type of transaction
- Object associated with the transaction
- Image of transaction (before/after)
- Columns associated with the transaction
- Transaction data formatted in Hex
- Length of the record
- ASCII format of the data
- Record position within the trail file (RBA)
At this point, we may be asking: why is this important? Understanding the trail files and how to find information within them is an important part of troubleshooting the Oracle GoldenGate environment.
Example: if a replicat abends, we may need to restart the replicat from a given RBA. Being able to identify the first, next, and last RBA in the trail file is helpful in understanding why the abend happened and in identifying a starting point for restarting successfully.
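In GGSCI, that repositioning might look something like the sketch below. The replicat name, trail sequence number, and RBA are examples only, not values from this post; substitute the RBA you identified with LogDump:

```
GGSCI> ALTER REPLICAT rfin, EXTSEQNO 0, EXTRBA 5432
GGSCI> START REPLICAT rfin
GGSCI> INFO REPLICAT rfin, DETAIL
```

The INFO command then confirms the replicat is running and reading from the expected position in the trail.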
In the end, the Oracle GoldenGate environment can be simple yet complex at the same time. Understanding the different components of the environment is very useful and worth the time involved to learn it.
Filed under: Golden Gate
The Oracle Applications User Experience team is delighted to announce that our Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook is available for free.
The Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook
We’re sharing the same user experience design patterns, and their supporting guidance on page types and Oracle ADF components, that Oracle uses to build simplified user interfaces (UIs) for the Oracle Sales Cloud and Oracle Human Capital Management (HCM) Cloud, so that you can build your own simplified UI solutions.
Design patterns offer big wins for applications builders because they are proven, reusable, and based on Oracle technology. They enable developers, partners, and customers to design and build the best user experiences consistently, shortening the application's development cycle, boosting designer and developer productivity, and lowering the overall time and cost of building a great user experience.
Now, Oracle partners, customers and the Oracle ADF community can share further in the Oracle Applications User Experience science and design expertise that brought the acclaimed simplified UIs to the Cloud and they can build their own UIs, simply and productively too!
Hi, this is Eric Maurice.
In addition to the release of the April 2014 Critical Patch Update, Oracle has also addressed the recently publicly disclosed issues in the Oracle Java Cloud Service. Note that the combination of this announcement with the release of the April 2014 Critical Patch Update is not coincidental or the result of the unfortunate public disclosure of exploit code, but rather the result of the need to coordinate the release of related fixes for our on-premise customers.
Shortly after issues were reported in the Oracle Java Cloud Service, Oracle determined that some of these issues were the result of certain security issues in Oracle products (though not Java SE), which are also licensed for traditional on-premise use. As a result, Oracle addressed these issues in the Oracle Java Cloud Service, and scheduled the inclusion of related fixes in the following Critical Patch Updates upon completion of successful testing so as to avoid introducing regression issues in these products.
For more information:
The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html
More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html
Log Buffer is globe trotting this week from end to end. From every nook, it has brought you some sparkling gems of blog posts. Enjoy!!!
On April 16th, Oracle announced the Oracle Virtual Compute Appliance X4-2.
Do your Cross Currency Receipts fail Create Accounting?
Oracle Solaris 11.2 Launch in NY.
WebCenter Portal 11gR1 dot8 Bundle Patch 3 (11.1.1.8.3) Released.
What do Sigma, a Leadership class and a webcast have in common?
Stairway to SQL Server Agent – Level 9: Understanding Jobs and Security.
SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.
SQL Server 2014 In-Memory OLTP Dynamic Management Views.
Why every SQL Server installation should be a cluster.
SQL Server Backup Crib Sheet.
Looking for Slave Consistency: Say Yes to –read-only and No to SUPER and –slave-skip-errors.
More details on disk IO-bound, update only for MongoDB, TokuMX and InnoDB.
Making the MTR rpl suite GTID_MODE Agnostic.
‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful.
MongoDB, TokuMX and InnoDB for disk IO-bound, update-only by PK.
Deploying JAX-WS to JCS? Getting a "java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl" error
- Deploying JAX-WS or a Spring app to Oracle Java Cloud XX.XX and getting a "java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl", but the application works perfectly fine on a local WebLogic Server?
- The issue
- It's a bug in the Java Cloud Service (bug #18241690); basically, JCS is picking up the wrong XSL transformer
- In your code, simply add the following piece of Java code to execute when your application starts up
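A sketch of such a startup hook follows. The factory class name below is the JDK's built-in XSLTC transformer, used here as an assumed substitute for the Xalan class JCS fails to find; verify it against your JDK version before relying on it:

```java
// Sketch of the workaround: force a TransformerFactory that the JCS
// classpath can resolve, instead of the Xalan class it fails to find.
// The class name below is the JDK-internal XSLTC factory -- an assumption
// to verify on your JDK version.
public class XmlFactoryFix {
    public static void applyWorkaround() {
        System.setProperty("javax.xml.transform.TransformerFactory",
            "com.sun.org.apache.xalan.internal.xsltc.trans.TransformerFactoryImpl");
    }
}
```

Call applyWorkaround() from a ServletContextListener, or any other code that runs at application startup, before the first XML transformation is attempted.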
And all should be fine :-)
We share our skills to maximize your revenue!
This is the fifth article in a series called Operationally Scalable Practices. The first article gives an introduction and the second article contains a general overview. In short, this series suggests a comprehensive and cogent blueprint to best position organizations and DBAs for growth.
We’ve looked in some depth at the process of defining a standard platform with an eye toward Oracle database use cases. Before moving on, it would be worthwhile to briefly touch on clustering.
Most organizations should hold off as long as possible before bringing clusters into their infrastructure. Clusters introduce a very significant new level of complexity. They will immediately drive some very expensive training and/or hiring demands – in addition to the already-expensive software licenses and maintenance fees. There will also be new development and engineering needed – perhaps even within application code itself – to support running your apps on clusters. In some industries, clusters have been very well marketed and many small-to-medium companies have made premature deployments. (Admittedly, my advice to hold off is partly a reaction to this.)

When Clustering is Right
Nonetheless there definitely comes a point where clustering is the right move. There are four basic goals that drive cluster adoption:
- Parallel or distributed processing
- Fault tolerance
- Incremental growth
- Pooled resources for better utilization
I want to point out immediately that RAC is just one of many ways to do clustering. Clustering can be done at many tiers (platform, database, application) and if you define it loosely then even an Oracle database can be clustered in a number of ways.

Distributed Processing
Stop for a moment and re-read the list of goals above. If you wanted to design a system to meet these goals, what technology would you use? I already suggested clusters – but that might not have been what came to your mind first. How about grid computing? I once worked with some researchers in Illinois who wrote programs to simulate protein folding and DNA sequencing. They used the Illinois BioGrid – composed of servers and clusters managed independently by three different universities across the state. How about cloud computing? The Obama Campaign in 2008 used EC2 to build their volunteer logistics and coordination platforms to dramatically scale up and down very rapidly on demand. According to the book In Search of Clusters by Gregory Pfister, these four reasons are the main drivers for clustering – but if they also apply to grids and clouds then what’s the difference? Doesn’t it all accomplish the same thing?
In fact the exact definition of “clustering” can be a little vague and there is a lot of overlap between clouds, grids, clusters – and simple groups of servers with strong & mature standards. In some cases these terms might be more interchangeable than you would expect. Nonetheless there are some general conventions. Here is what I have observed:

CLUSTER: Old term. Most strongly implies shared hardware resources of some kind, tight coupling and physical proximity of servers, and treatment of the group as a single unit for execution of tasks. While some level of single system image is presented to clients, each server may be individually administered; strong standards are desirable but not always implied.

GRID: Medium-aged term. Implies looser coupling of servers, geographic dispersion, and perhaps cross-organizational ownership and administration. There will not be grid-wide standards for node configuration; individual nodes may be independently administered. The grid may be composed of multiple clusters. Strong standards do exist at a high level for management of jobs and inter-node communication. Or, alternatively, the term “grid” may more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

CLOUD: New term. Implies service-based abstraction, virtualization and automation. It is extremely standardized with a bias toward enforcement through automation rather than policy. Servers are generally single-organization; however, service consumers are often external. Related to the term “utility computing” or the “as a service” terms (Software/SaaS, Platform/PaaS, Database/DaaS, Infrastructure/IaaS).
These days, the distributed processing field is a very exciting place because the technology is advancing rapidly on all fronts. Traditional relational databases are dealing with increasingly massive data volumes, and big data technology, combined with pay-as-you-go cloud platforms and mature automation toolkits, has given bootstrapped startups unprecedented access to extremely large-scale data processing.

Building for Distributed Processing
Your business probably does not have big data. But the business case for some level of distributed processing will probably find you eventually. As I pointed out before, the standards and driving principles at very large organizations can benefit your commodity servers right now and eliminate many growing pains down the road.
In the second half of this article I will take a look at how this specifically applies to clustered Oracle databases. But I’m curious, are your server build standards ready for distributed processing? Could they accommodate clustering, grids or clouds? What kinds of standards do you think are most important to be ready for distributed processing?
Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone
Since the demands from the business for IT services are non-stop, creating copies of production databases in order to develop, test, and deploy new applications can be labor intensive and time consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.
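The snapshot and “time travel” idea can be illustrated with a small sketch. This is only a conceptual toy, not how Snap Clone actually works: the real product operates at the storage layer with copy-on-write technology under Enterprise Manager, and the `SnapshotStore` class and its methods below are invented for illustration.

```python
import copy

class SnapshotStore:
    """Toy key-value 'database' with named snapshots and time travel.

    Illustrative only: a real system records only changed blocks
    (copy-on-write) instead of deep-copying the whole data set.
    """

    def __init__(self):
        self.data = {}
        self.snapshots = []  # list of (label, frozen copy)

    def put(self, key, value):
        self.data[key] = value

    def snapshot(self, label):
        # Freeze the current state under a label.
        self.snapshots.append((label, copy.deepcopy(self.data)))

    def time_travel(self, label):
        """Return a private, writable clone of the store as of `label`."""
        for name, frozen in self.snapshots:
            if name == label:
                clone = SnapshotStore()
                clone.data = copy.deepcopy(frozen)
                return clone
        raise KeyError(label)

db = SnapshotStore()
db.put("orders", 100)
db.snapshot("before_release")
db.put("orders", 250)                  # production moves on
clone = db.time_travel("before_release")
print(clone.data["orders"])            # prints 100: data as of the snapshot
```

The key property, which the real feature provides far more cheaply, is that the clone is a private copy: changes to production after the snapshot do not affect it, and changes to the clone do not affect production.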
Join us for an in-depth technical webcast and learn how Snap Clone, a capability of the Oracle Cloud Management Pack for Oracle Database, can fundamentally improve the efficiency and agility of administrators and QA engineers while saving CAPEX on storage. Benefits include:
- Agile provisioning (~2 minutes to provision a 1 TB database)
- Over 90% storage savings
- Reduced administrative overhead from integrated lifecycle management
April 24 — 10:00 a.m. PT | 1:00 p.m. ET
May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET
May 22 — 10:00 a.m. PT | 1:00 p.m. ET
Introduction: Cars and Context
Like many people of a certain age, my first exposure to the term dashboard was when I heard my dad using it when driving the car. He referred to it as “the dash”.
Dad’s “dash” was an analog affair that told him the car’s speed, the miles traveled, the engine oil level and temperature, if he had enough gas in the tank, and a few other little bits of basic information. It was all whirring dials, trembling needle pointers on clock-style faces, switches to toggle on and off, a couple of sliders, and little lights that blinked when there was trouble.
Drivers in those days needed to pay attention, all the time, to their dashboards.
Old school car dashboards: quaint and charming. And a lot of work. (Source: WikiMedia Commons)
Dashboards in cars, and how drivers use them, are different now. The days of a dashboard with switches to flick or dials to turn are gone.
Today, a family car generates hundreds of megabytes of data every second. Most of this data is discarded immediately and is of no use to the driver, but some of it is useful, and may even be life saving. Technology makes sense of the surging data so that drivers can respond easily to important information, because it’s presented to them in a timely, easily consumed, and actionable way.
Car dashboards are now closer to the “glass cockpit” world that fighter jet pilots experience. Cars have tiny sensors, even cameras, and other technology inside and outside the vehicle that detect and serve up striking digital visualizations about the health of the car and driver performance. Drivers are empowered to be “situationally aware” about what’s going on (what we UXers would call “context”), as they listen to or watch for signals and cues and respond to them naturally, using voice, for example.
Some car dashboards even use heads-up displays, projecting real-time information onto the windshield. Drivers know what’s going on with their car without taking their eyes off the road.
Chevrolet Corvette Heads-up Display (Source: www.chevrolet.com)
Dashboard design itself is now the essence of simplicity and cutting-edge technology, and stylish too, arousing passions about what makes a great interface inside a car. It’s all part of creating an experience to engage drivers for competitive advantage in a tight automobile market.
Tesla Model S Dashboard (Source: www.teslamotors.com)
The Emergence of Digital Dashboards User Experience
When it comes to software applications and websites, dashboards are around us everywhere too. We’re all long familiar with how such dashboards work and how to use them, beginning with the pioneering My Yahoo! portal that popularized the use of the “My” pronoun in web page titles, right through to today’s wearable apps dashboards that are a meisterwerk of information visualization, integrating social media and gamification along the way.
FitBit Dashboard (Source: Author)
An enterprise application dashboard is a one-stop shop of information. It’s a page made up of portlets or regions, chunking up related information into displays of graphs, charts, and graphics of different kinds. Dashboards visualize a breadth of information that spans a whole range of activities in a functional area.
Dashboards aggregate data into meaningful visual displays and cues, using processor horsepower at the backend to do the work that users used to do with notepads, calculators, or spreadsheets to find out what’s changed or is in need of attention.
Dashboards enable users to prioritize work and to manage exceptions by taking light-weight actions immediately from the page, or to drill down to explore and do more in a transactional or analytics work area, if necessary.
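The “manage by exception” behavior described above can be sketched in a few lines: aggregate raw metrics, flag only the items that breach a threshold, and rank them so the most urgent work surfaces first. The metric names and thresholds below are invented for illustration; a real dashboard would pull these from a reporting backend.

```python
# Hypothetical dashboard regions with current values and alert thresholds.
metrics = [
    {"region": "Open Invoices", "value": 42, "threshold": 50},
    {"region": "Overdue Tasks", "value": 9,  "threshold": 5},
    {"region": "Stock-outs",    "value": 3,  "threshold": 0},
]

def exceptions(rows):
    """Return only the breached items, worst overshoot first."""
    breached = [r for r in rows if r["value"] > r["threshold"]]
    return sorted(breached,
                  key=lambda r: r["value"] - r["threshold"],
                  reverse=True)

for item in exceptions(metrics):
    print(f"{item['region']}: {item['value']} (limit {item['threshold']})")
```

Here “Open Invoices” never reaches the user’s attention at all; the dashboard’s job is precisely that filtering and prioritizing, so the user only drills down where action is needed.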
The dashboard concept remains a core part of the enterprise applications user experience, particularly for work roles that rely on monitoring of information, providing reports on performance, or needing a range of information to make well-timed and high-level decisions.
In work, we now also have to deal with that other torrent of data we hear about: big data. Dashboards are ideal ways to make sense of this data and to represent the implications of its analysis to a viewer, bringing insight to users rather than the other way around.
To this end, Oracle provides enterprise application developers with the Oracle ADF Data Visualization Tools (DVT) components to build dashboards using data in the cloud, and with design guidance in the form of the Oracle Fusion Applications, Oracle Endeca and Oracle Business Intelligence Enterprise Edition UI patterns and guidelines for making great-looking dashboards.
Typical Oracle Fusion Applications Desktop UI Dashboard (Source: Oracle)
Beyond Desktop Dashboards…
Dashboards’ origins as a desktop UI concept obviously predated the “swipe and pinch” world of mobility, today’s cross-device, flexible way of working with shared data in the cloud. Sure, we still have a need for what dashboards were originally about. But we now need new ways for big data to be organized and visualized. We need solutions that reflect our changing work situations (our context) so that we can act on the information quickly, using a tablet or a smartphone, or whatever’s optimal. And we need new ways of describing this dashboard user experience.
Enter the era of “glance, scan, and commit”, a concept that we will explore in a future Usable Apps blog.