
Feed aggregator

E-rows / A-rows

Jonathan Lewis - 9 hours 33 min ago

A recent post on the OTN database forum reminded me how easy it is to forget to keep repeating a piece of information after the first couple of hundred times you’ve explained it. No matter how “intuitively obvious” it is for one person, it’s new to someone else.

Here’s an execution plan that raised the question that prompted this note – it comes from calling dbms_xplan.display_cursor() with the ‘allstats last’ format option after enabling rowsource execution statistics (using the gather_plan_statistics hint, setting the parameter statistics_level to all, or setting the hidden parameter “_rowsource_execution_statistics” to true):

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                        |      1 |        |   1989 |00:00:04.96 |    9280 |    897 |
|   1 |  NESTED LOOPS OUTER                   |                        |      1 |   2125 |   1989 |00:00:04.96 |    9280 |    897 |
|   2 |   NESTED LOOPS OUTER                  |                        |      1 |   2125 |   1989 |00:00:04.93 |    9271 |    895 |
|   3 |    NESTED LOOPS OUTER                 |                        |      1 |   2125 |   1989 |00:00:00.03 |    5732 |      0 |
|   4 |     COLLECTION ITERATOR PICKLER FETCH |                        |      1 |   1989 |   1989 |00:00:00.01 |       0 |      0 |
|*  5 |     TABLE ACCESS BY INDEX ROWID       | TABLE1                 |   1989 |      1 |   1178 |00:00:00.03 |    5732 |      0 |
|*  6 |      INDEX RANGE SCAN                 | IDX_TABLE1             |   1989 |      2 |   2197 |00:00:00.02 |    3545 |      0 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | TABLE2                 |   1989 |      1 |   1178 |00:00:03.26 |    3539 |    895 |
|*  8 |     INDEX UNIQUE SCAN                 | IDX_TABLE2_PK          |   1989 |      1 |   1178 |00:00:03.25 |    2359 |    895 |
|   9 |   TABLE ACCESS BY INDEX ROWID         | TABLE3                 |   1989 |      1 |      0 |00:00:00.03 |       9 |      2 |
|* 10 |    INDEX UNIQUE SCAN                  | IDX_TABLE3_PK          |   1989 |      1 |      0 |00:00:00.03 |       9 |      2 |
-----------------------------------------------------------------------------------------------------------------------------------

Many people have heard about “tuning by cardinality feedback” (a term first used, I think, by Wolfgang Breitling many years ago); and it’s fairly common knowledge that this requires you to compare the predicted (estimated) E-rows with the actual A-rows generated by a line of the plan. A critical detail that often fails to appear when this point is being explained is that there is a very important difference between the meaning of E-rows and A-rows. Although this is a point I make very clear in my presentations on how to read execution plans, it was only when I found myself explaining it to Tony Hasler a couple of years ago that I realised that it’s not commonly known and that, in casual conversation, it’s easy to overlook the fact that it’s something that might need to be mentioned explicitly. So here’s the critical detail:

  • E-Rows is the number of rows the optimizer is expecting the operation to produce each time it is called
  • A-Rows is the cumulative number of rows summed across all calls to the operation

As a consequence of this difference you should not be comparing A-Rows with E-Rows; you should be comparing A-Rows with E-Rows * Starts. So, for example, operation 6 in the plan above (index range scan idx_table1) shows a reasonable estimate because 1,989 starts at 2 rows per start is fairly close to the final 2,197 rows produced (especially when you consider that there’s a lot of integer rounding going on to display the values).
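If you want to try this for yourself, here’s a minimal sketch of the mechanism described above – the table name and predicate are invented purely for illustration:

select /*+ gather_plan_statistics */ count(*)
from   t1                          -- t1 and its owner column are hypothetical
where  owner = 'SYS';

-- report the plan of the last statement executed by this session,
-- including the Starts, E-Rows and A-Rows columns
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

The two nulls tell display_cursor() to pick up the last cursor executed by the session; when you read the output, remember to multiply E-Rows by Starts before comparing the result with A-Rows.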

 


Why You Should Never Use MongoDB « Sarah Mei

Jeff Moss - 10 hours 15 min ago

An interesting article from Sarah – much good advice there!

Source: Why You Should Never Use MongoDB « Sarah Mei

Samsung Developer Conference: VR and More

Oracle AppsLab - Wed, 2016-05-04 15:48

VR was the big thing at the Samsung Developer Conference, and one of the points that got driven across, both in the keynotes and in other talks throughout the day, was that VR is a fundamentally new medium, something we haven’t seen since the motion picture.

Injong Rhee, the executive VP of R&D for Software and Services, laid out some of VR’s main application areas: Gaming, Sports, Travel, Education, Theme Parks, Animation, Music, and Real Estate. Nothing too new here, but it is a good summary of the major use cases, and they echo what we’ve heard in our own research.


He also mentioned some of their biggest areas for innovation: Weight, dizziness, image quality, insufficient computing power, restricted mobility, limited input control. For anyone who’s tried the Gear VR and had to use the control pad on the side of the visor, I think we can agree it’s not ideal for long periods of time. And while some VR apps leave me and others with no nausea at all, other apps, where you’re moving around and stepping up and down, can certainly cause some discomfort. I’m curious to see how some of those problems of basic human physiology can be overcome.

A fascinating session after the keynote was with Brett Leonard, who many years ago directed Lawnmower Man, a cautionary tale about VR, which despite the bleak dystopic possibilities it portrayed, inspired many of today’s VR pioneers. Leonard appeared with his brother Greg, a composer, and Frank Serafine, an Oscar-winning sound designer who did the sound for Lawnmower Man.

Brett, Greg, and Frank made a solid case for VR as a new medium that has yet to be even partially explored, and will surely have a plethora of new conventions that storytellers will need to work with. We’ve become familiar with many aspects of the language of film, such as things happening off screen but implied to be happening. But with the 360-degree experience of VR, there’s no longer that same framing of shots, or things happening off the screen. The viewer chooses where to look.

Brett also listed his five laws of VR, which cover some of his concerns, given that it is a powerful medium that could have real consequences for people’s minds and physiology, particularly developing children. His laws, heavily paraphrased, are:

  1. Take it seriously.
  2. VR should promote interconnecting with humanity, not further reinforcing all the walls we already have, and that technology so far has helped to create.
  3. VR is its own reality.
  4. VR should be a safe space—there are a huge amount of innovations possible, things that we haven’t been able to consider before. This may be especially so for medical and psychological treatments.
  5. VR is the medium of the global human.

Another interesting part of the talk was about true 360-degree sound, which Serafine said hadn’t really been done well before, but with the upcoming Dolby Atmos theaters, finally has.

Good 360-degree sound, not just stereo like we’re used to, will be a big part of VR feeling increasingly real, and will pose a challenge for VR storytelling, because it means recording becomes more complex, and consequently editing and mixing.

Connected Car

Samsung also announced their effort for the connected car, with a device that looks a lot like the Automatic (previously blogged about here) or the Mojio. It will offer all the features of those other devices—driving feedback that can become a driver score (measuring hard braking, fast accelerating, hard turns, and the like), as well as an LTE connection that allows it to stay connected all the time and serve as a WiFi hotspot. But Samsung adds a little more interest to the game with vendor collaborations, like with Fiat, where you can unlock the car, or open the trunk from your app. This can’t currently be done with other devices.

It should come out later this year, and will also have a fleet offering, which should appeal to enterprise companies. If they have more of these exclusive offerings because of Samsung’s relationships with various vendors, maybe it will do better than its competitors.

Database as a Service

Pat Shuff - Wed, 2016-05-04 15:38
Today we are going to dive into Database as a Service offered from Oracle. This product is the same product offered by Oracle as a perpetual processor license or perpetual named user license for running database software in your data center. The key difference is that the database is provisioned onto a Linux server in the cloud and, rather than paying $47,500 for a processor license and 22% annually after that, you pay for the database services on an hourly or monthly basis. If you have a problem that needs only a few weeks, you pay for the service for a few weeks. If you have a problem that takes a very large number of processors but for a very short period of time, you can effectively lease the large number of processors in the cloud and purchase a much smaller number of processors in your data center.

Think of a student registration system. If you have 20K-30K students that need to log into a class registration system, you need to size this server for the peak number of students going through the system. In our example, we might need an 8 core system to handle the load during class registration. Outside the two or three weeks for registration, this system sits idle at less than 10% utilization because it is used to record and report grades during the semester. Rather than paying $47.5K times 8 cores times 0.5 for an x86 or Sparc server ($190K), we only have to pay $47.5K times 2 cores times 0.5 for x86 or Sparc cores ($47.5K) and lease the additional processors in the cloud for a month at $3K/core/month ($24K). We effectively reduced the cost from $190K to $71.5K by using the cloud for the peak period. Even if we do this three times during the year the price is $119.5K, which is a cost savings of $70.5K. The second year we would be required to pay $41.8K in support cost for the larger server. By using the smaller server we drop the support cost to $10.5K. This effectively pays for leasing a third of the cloud resources by using a smaller server and bursting to the cloud for high peak utilization.
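If you want to sanity-check that arithmetic, here is a quick calculation you could run from SQL*Plus; the figures are just the illustrative list prices used above, not a formal quote:

select 47500 * 8 * 0.5                as on_prem_8_core_license,     -- $190K
       47500 * 2 * 0.5                as on_prem_2_core_license,     -- $47.5K
       3000 * 8                       as cloud_burst_8_cores_1_mth,  -- $24K for one peak month
       47500 * 2 * 0.5 + 3000 * 8     as hybrid_cost_one_peak,       -- $71.5K
       47500 * 2 * 0.5 + 3 * 3000 * 8 as hybrid_cost_three_peaks     -- $119.5K
from   dual;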

Now that we have looked at one of our use cases and the cost savings associated with using the cloud for peak utilization and reducing the cost of servers and software in our data center, let's dive into the pricing and configuration of Database as a Service (DBaaS) offered by Oracle in the public cloud services. If we click on the Platform -> Database menu we see the following page.

If we scroll down to the bottom we see that there are effectively three services that we can use in the public cloud. The first is Database Schema as a Service. This allows you to access a database through a web interface and write programs to read and present data to the users. This is the traditional Application Express (APEX) interface that was introduced in Oracle 9. This is a shared service where you are given a database instance that is shared with other users. The second service is Database as a Service. This is an 11g or 12c database installed on a Linux server in the cloud. This is a full installation of the database with ssh access to the operating system and sqlplus access to the database from a client system. The third service is Exadata as a Service. This is the Oracle database on dedicated hardware that is optimized to run the Oracle database.

The Schema as a Service is also known as Application Express. If you have never played with apex.oracle.com, click on the link and register for a free account. You can create an instance, a database schema, and store up to 10 MB or 25 MB of data for free. If you want to purchase a larger storage amount it is sold in 5 GB, 20 GB, or 50 GB increments.

The 10 or 25 MB instance is free. The 5 GB instance is $175/month. The 20 GB is $900/month, and the 50 GB is $2,000/month.

Tomorrow we will dive a little deeper into Schema as a Service. In summary, this is a database instance that can contain multiple tables and has an application development and web front end allowing you to access the database. You cannot attach with sqlplus. You cannot attach on port 1521. You cannot put a Java or PHP front end in front of your database and use it as a back end repository. You can expose database data through applications and REST API interfaces. This instance is shared on a single computer with other instances. You can have multiple instances on the same computer and the login gives you access to your applications and your data in your instance.

The Database as a Service (DBaaS) is slightly different. With this you are getting a Linux instance that has been provisioned with a database. It is a fully deployed, fully provisioned database based on your selection criteria. There are many options when you provision DBaaS. Some of the options are virtual vs full instance, 11g vs 12c, and Standard Edition vs Enterprise Edition vs Enterprise Edition High Performance vs Enterprise Edition Extreme Performance. You need to provide an expected data size, say whether you plan on backing up the data, and supply a cloud object repository if you do. You need to provide ssh keys to log in as oracle or opc/root to manage the database and operating system. You also need to pick a password for the sys/system user inside the database. Finally, you need to pick the processor and memory shape that will run the database. All of these options have a pricing impact. All of these options affect functionality. It is important to know what each of these options means.

Let's dive into some of these options. First, virtual vs full instance. If you pick a full instance you will get an Oracle Enterprise Linux installation that has the version of the database that you requested fully installed and operational. For standard installations storage is managed by the logical volume manager and provisioned across four file systems. The /u01 file system is the ORACLE_HOME. This is where the database binaries are installed. The /u02 file system is the +DATA area. This is where table extents and table data are located. The /u03 file system is the +FRA area. This is where backups are dropped using the RMAN command, which should run automatically every night for incremental backups and at 2am on Sunday morning for a full backup. You can change the times and backup configurations with command line options. The /u04 area is the +RECO area. This is where change logs and other log files are dropped. If you are using Data Guard to replicate data to another database or from another database, this is where the change logs are found.

If you pick a virtual instance you basically get a root file system running Oracle Enterprise Linux with a tar ball that contains the oracle database. You can mount file systems as desired and install the database as you have it installed in your data center. This configuration is intended to mirror what you have on-premise to test patches and new features. If you put everything into /u01 then install everything that way. If you put everything in the root file system, you have the freedom to do so even though this is not the recommended best practice.

The question that you are not asked when you try to create a DBaaS is whether this service is metered or non-metered. This question is asked when you create your identity domain. If you request a metered service, you have the flexibility to select the shapes that you want and whether you are billed hourly or monthly. The rates are determined by the processor shape, amount of memory, and what database option you select (standard, enterprise, high performance, or extreme performance). More on that later. With the metered option you are free to stop the database (but not delete it) and retain your data. You suspend the consumption of the database license but not the compute and storage. This is a good way of saving a configuration for later testing and not getting charged for using it. Think of it as having an Uber driver sit outside the store but not charge you to sit there. When you get back in the car the charge starts. A better analogy would be Cars2Go. You can reserve a car for a few hours and drive it from Houston to Austin. You park the car in the Cars2Go parking slot next to the convention center and don't pay for parking. You come out at the end of your conference, swipe your credit card and drive the car back to Houston. You only get charged for the car when it is between parking lots. You don't get charged for it while it is parked in the reserved slot. You pay a monthly charge for the service (think of compute and storage) at a much lower rate.

A non-metered service, on the other hand, is like renting a car from a car rental place: you pay for the car that they give you and it is yours until you return it to the rental place. You can't stop paying for the car while you are at your convention, as you can with Cars2Go. You have to pay for parking at the hotel or convention center. You can't decide halfway into your trip that you really need a truck instead of a car, or a mini-van to hold more people, and change out cars. The rental company will end your current agreement and start a new one with the new vehicle. Non-metered services are similar. If you select an OC3M shape then you can't upgrade it to an OC5 to get more cores. You can't decide that you need to use the diagnostics and tuning options and upgrade from Enterprise Edition to Enterprise Edition High Performance. You get what you started with and have 12 months to consume the services reserved for you.

The choice of 11g or 12c is a relatively simple one. You get 11.2.0.4 running on Oracle Enterprise Linux 6.6 or you get 12.1.0.2 running on Oracle Enterprise Linux 6.6. This is one of those binary questions. You get 11g or 12c. It really does not affect any other question. It does affect features, because 12c has more features available to it, but this choice is simple. Unfortunately, you can't select 11.2.0.3 or 10.whatever or 9.whatever. You get the latest running version of the database and have an option to upgrade to the next release when it is available or not upgrade. Upgrades and patches are applied after you approve them.

The next choice is the type of database. We will dive into this deeper in a couple of days. The basic choice is that you pick Standard Edition or Enterprise Edition. Within Enterprise Edition you can pick just the base Enterprise Edition (with encryption only), the High Performance option (with most of the options), or the Extreme Performance option (with all of the options). The difference between High Performance and Extreme Performance is that Extreme Performance includes the Active Data Guard, In-Memory, and Real Application Clusters options. Again, we will dive into this deeper in a later blog entry.

The final option is the configuration of the database. I wanted to include a screen shot here but the main options that we look at are the CPU and memory shape which dictates the database consumption cost as well as the amount of storage for table space (/u02) and backup space (/u03 and /u04). There are additional charges above 128 GB for table storage and for backups. We will not go into the other options on this screen in this blog entry.

In summary, DBaaS is charged on a metered or un-metered basis. The un-metered option is lower cost but less flexible. If you know exactly what you need and the time that it is needed, this is a better option. Costs are fixed. Expenses are predictable. If you don't know what you need, metered service might be better. It gives you the option of starting and stopping different processor counts, shutting off the database to save money, and selecting different options to test out different features. Look at the cost options and at a blog entry that we will do in a few days analyzing the details of cost. Basically, the database can be mentally budgeted as $3K/OCPU/month for Enterprise Edition, $4K/OCPU/month for High Performance, and $5K/OCPU/month for Extreme Performance. Metered options typically cross over at 21 days. If you use metered service for more than 21 days your charges will exceed this amount. If you use it for less, it will cost less.

The Exadata as a Service is a special use case of Database as a Service. In this service you are getting a quarter, half, or full rack of hardware that is running the database. You get dedicated hardware that is tuned and optimized to run the Oracle database. Storage is dedicated to your compute nodes and no one else can use these components. You get 16, 56, or 112 processors dedicated to your database. You can add additional processors to get more database power. This service is available in a metered or non-metered option. All of the database options are available with this product. All of the processors are clustered into one database and you can run one or many instances of a database on this hardware. With the 12c option you get multi-tenant features so that you can run multiple instances and manage them with the same management tools, while giving users full access to their own instance but not to other instances running on the same database.
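To make the multi-tenant comment a little more concrete, here is a minimal sketch of adding a pluggable database to a 12c container database; the PDB name, admin user, password, and file paths below are purely illustrative:

-- create a new PDB from the seed and open it
CREATE PLUGGABLE DATABASE team1pdb
  ADMIN USER team1_admin IDENTIFIED BY "Example#Pwd1"
  FILE_NAME_CONVERT = ('/u02/app/oracle/oradata/cdb1/pdbseed/',
                       '/u02/app/oracle/oradata/cdb1/team1pdb/');

ALTER PLUGGABLE DATABASE team1pdb OPEN;

Each group then connects to its own PDB and sees only its own data, while the administrator manages all of the PDBs from the one container database.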

Exadata cost for metered services

Exadata cost for non-metered services

In summary, there are two options for database as a service. You can get a web-based front end to a database and access all of your data through http and https calls. You can get a full database running on a Linux server or Linux cluster that is dedicated to you. You can consume these services on an hourly, monthly, or yearly basis. You can decide on less expensive or more expensive options as well as how much processor, memory, and storage you want to allocate to these services. Tomorrow, we will dive a little deeper into APEX or Schema as a Service and look at how it compares to services offered by Amazon and Azure.

Oracle Security And Delphix Paper and Video Available

Pete Finnigan - Wed, 2016-05-04 15:05

I did a webinar with Delphix on 30th March 2016 on USA time. This was a very good session with some great questions at the end from the attendees. I did a talk on Oracle Security in general, securing non-production....[Read More]

Posted by Pete On 01/04/16 At 03:43 PM

Categories: Security Blogs

3 Days of Oracle Security Training In York, UK

Pete Finnigan - Wed, 2016-05-04 15:05

I have just updated the public Oracle Security training dates on our Oracle Security training page to remove the public trainings that have already taken place this year and to add a new training in York for 2016. After the....[Read More]

Posted by Pete On 31/03/16 At 01:53 PM

Categories: Security Blogs

Oracle Data Masking and Secure Test Databases

Pete Finnigan - Wed, 2016-05-04 15:05

My daily work is helping my customers secure their Oracle databases. I do this in many ways from performing detailed security audits of key databases to helping in design of secure lock down policies to creating audit trails to teaching....[Read More]

Posted by Pete On 14/03/16 At 08:45 AM

Categories: Security Blogs

BOF: A Sample Application For Testing Oracle Security

Pete Finnigan - Wed, 2016-05-04 15:05

In my Oracle security training classes I use a couple of sample applications for various demonstrations. I teach people how to perform security audits of Oracle databases, secure coding in PL/SQL, designing audit trail solutions and locking down Oracle. We....[Read More]

Posted by Pete On 10/03/16 At 11:07 AM

Categories: Security Blogs

Two New Oracle Security Presentations Available

Pete Finnigan - Wed, 2016-05-04 15:05

I attended the UKOUG conference last week Monday to Wednesday in Birmingham. This is the first year for three years that it has been back at the ICC in the center of Birmingham. The last two years have seen the....[Read More]

Posted by Pete On 14/12/15 At 08:54 PM

Categories: Security Blogs

Oracle Security Training In York

Pete Finnigan - Wed, 2016-05-04 15:05

We ran a five day Oracle Security training event in York, England from September 21st to September 25th at the Holiday Inn hotel. This proved to be very successful and good fun. The event included back to back teaching by....[Read More]

Posted by Pete On 22/10/15 At 08:49 PM

Categories: Security Blogs

New Presentation - Building Practical Oracle Audit Trails

Pete Finnigan - Wed, 2016-05-04 15:05

I wrote a presentation on designing and building practical audit trails back in 2012 and presented it once and then never again. By chance I did not post the pdf's of these slides at that time. I did though some....[Read More]

Posted by Pete On 01/10/15 At 05:16 PM

Categories: Security Blogs

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2016-05-04 15:05

Oracle Application Express is a great rapid application development tool where you can write your applications functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Enjoy these Snappy Video Clips from Oracle's Modern Customer Experience

Linda Fishman Hoyle - Wed, 2016-05-04 12:39
Oracle's Modern Customer Experience held in Las Vegas, NV, April 26-28, 2016, proved to be an amazing gathering of more than 4,200 CX professionals, including marketers, sales people, customer service executives, and e-commerce professionals. It was a high-energy conference powered by masterful presentations from Troy Carter, founder/CEO of Atom Factory, entrepreneur, investor, and manager, and Mark Hurd, Oracle co-CEO.

The vibe was felt all over the MGM compound, but even more so in the demo and exhibit area. This is where video mavens Ruth Kale-Fok, Maureen Boctor, Brian Mock, David Hope-Ross, Martin Taylor, and Kathryn Perry were working their camera and interview magic.

You know, some people freeze when they are asked to speak on camera.

Generally, analysts and influencers don't fall into that camp.

True to form, the analysts and influencers at the conference were more than happy to share their viewpoints on the importance of customer experience and the role Oracle is playing with its end-to-end solution.

Watch this short video starring Dr. Natalie Petouhoff from Constellation Research, Keith Dawson and Aphrodite Brinsmead from Ovum, Rebecca Wetterman from Nucleus Research, and Mark Smith from Ventana Research.

They also captured some great quotes from conference attendees in these video clips:

Want to grow your revenue? Ask our CEO how …

Linda Fishman Hoyle - Wed, 2016-05-04 11:02
At the Modern CX Experience last week in Las Vegas, Oracle's co-CEO Mark Hurd told the audience that CEOs have about 18 quarters (4.5 years) to prove themselves before activist shareholders get involved. In an environment where global economic growth is anemic at best and rapidly declining at worst (consider Russia, Brazil, etc.), most CEOs look for quick ways to gain upside on earnings vs. revenue.
Many turn to slashing expenses as a sure way to deliver desired earnings results. Others try to build new products or lower prices to lure customers from the competition. But what they miss is that their success, and that of their company, lies with their existing customers. The American Customer Satisfaction Index says:
Companies with happier customers give five times the return of companies with unhappy customers.
A 5x return would validate any CEO―and thrill his or her board members and shareholders. But pushing good customer service to get happier customers isn’t exactly jazzy or unique for a nervous CEO who needs to show results quickly.
However, the research doesn’t lie; nor do our personal experiences. In his keynote, Mark shared his frustration about his bank and his US-based cell phone service provider. Both companies have siloed processes which translate into unnecessary and irritating interactions―and plenty of lost opportunities to engage and delight customers like him.
So Mark’s advice to the CEOs and professionals in the audience was to step up their customer experience games with a suite of integrated applications. “There are lots of CX providers so it’s easy to go down the path of more silos; you’ll get some benefits, innovation, but the suite is the real benefit,” said Mark.  The recipe for success is really pretty simple―get to know your customers’ wants and needs and serve them well.
Takeaways:
  • Oracle is the only vendor with an integrated suite.
  • Oracle is the leader.
  • Oracle is committed to providing best-of-breed integrated solutions in each area.
  • Oracle can help every CEO move closer to that 5x return.

Go Mobile and Upgrade your Oracle Forms with Auraplayer

WebCenter Team - Wed, 2016-05-04 09:38

Author: Mitchell Palski – Oracle Fusion Middleware Specialist

Let’s start off with this: Why would you upgrade your Oracle Forms applications? Oracle Forms Builder and Oracle Forms Services have been upgraded to simplify the development and deployment of Forms applications on the Web. In a world where very few applications stand alone and most need to integrate into their Enterprise, Oracle recognizes the importance of being able to persist your legacy Forms applications into the future while enabling your Service-Oriented Architecture to leverage their capabilities. The future of Forms includes improving the Java-based Web user interface and extending product “openness” by allowing Java integration on all three tiers.

Aside from improving your traditional Oracle Forms offering, there is a new consideration that many of us have never had to deal with in the past – mobile access. Oracle Forms and Reports are such convenient tools for developing applications for our mobile workforce, yet they’ve been restricted to laptops and desktop machines. Here’s the good news – Oracle partner Auraplayer has the skillset and the software to help your organization quickly and efficiently overcome that obstacle.

Why did I say “skillset”? Well, Auraplayer has some really talented people in their management ranks that have been working with Oracle’s Fusion Middleware technology for a long time. One of the challenges that I had when I first heard about Auraplayer was understanding their value for my customers, so I talked to CEO Mia Urman and she helped explain to me that Auraplayer isn’t just selling a product (you’ll hear about that in a minute), they’re also helping customers upgrade. Oracle Forms has been around for so long that there aren’t a ton of projects out there that have the same personnel on staff as when they first started. Auraplayer brings the industry expertise to mitigate your risks and deliver a successful Forms upgrade so your team can start taking advantage of all the new features 12c has to offer.

Upgrades aren’t the most exciting projects in the world, so woopty-doo, right? Well, the reason I’m excited is that if you’re working with Auraplayer to upgrade your Forms application, why not let them tell you about their Mobile offering while they’re there? I won’t try to go through all of their features in this post today, but let me give you the highlights (from my perspective):
  1. You don’t have to change your Forms applications. Re-writing code to enable a mobile UI can be costly, it can take a long time to complete, and it requires support from your development team after deployment.
  2. Auraplayer will REST-enable your Forms application so you can consume services across the Enterprise, making integration easy and mobile development fast and efficient.
  3. Couple Auraplayer with Oracle Mobile Cloud Service and you have a complete project plan from start to finish.
    • Define and enable processes
    • Automatically generate services
    • Develop UI and consume services – works on any mobile device

So, do your end users have smart phones and tablets? Yes they do. Do you need to upgrade your legacy Forms apps and get those users actively using them in the field? You tell me!

New York City Metropolitan Transportation Authority (NYC MTA) figured out a way to “work smarter, not harder” by leveraging Auraplayer. Read about their success here: https://blogs.oracle.com/mobile/entry/new_york_mta_mobile_cloud

If you want to learn more, check out these videos by Grant Ronald:

What is Oracle Cloud Management Pack in Enterprise Manager Cloud Control?

Arun Bavera - Wed, 2016-05-04 09:28
Cloud Management pack basically consists of these features:
These packs are basically used to set up your private in-house Cloud Services (MWaaS, DBaaS, IaaS, PaaS, SCHaaS, SnapClone, etc.).
Without these packs we cannot use any self-service provisioning features.
With DBLM you can use Deployment Procedures to create/clone new DBs, but you cannot use the self-service portal.
The traditional DB Lifecycle Management pack provides all the management of the DB, but there are some exceptions, such as the one noted below:
Note: The ability to create 'new' full clones is licensed under Database Lifecycle Management Pack for Oracle Database, but the additional 'refresh' function requires licensing the Cloud Management Pack for Oracle Database.

Let us discuss a scenario where a customer may require Hybrid Cloud Management capabilities:
The customer's OMS is running on-premise and they also have some cloud service subscriptions.
If they just want to manage their cloud service targets, they can use the Hybrid Agent, making sure they have the traditional license packs for those targets, to get a single pane of glass across all their traditional on-premise targets and their cloud targets.
They can also use the REST API that comes with the cloud service offerings to manage and view their cloud services.
If the customer is not going to build any in-house private cloud service (MWaaS, DBaaS, IaaS, PaaS, SCHaaS, SnapClone, etc.) using OEM, they may not require the Cloud Management packs.
The Cloud Management packs provide top-up features on top of the traditional packs.
Oracle Cloud Management Pack for Oracle Database prerequisites: The Oracle Cloud Management Pack for Oracle Database requires the Database Lifecycle Management Pack for Oracle Database.
 
As of now, to raise a request to create an instance of DBCS or JCS on Oracle Cloud you have to use the REST API or the Cloud UI.
Hopefully we will soon have the capability in the OEM Cloud Management pack to raise a service request directly against an Oracle Cloud Service (a cloud service hosted at an Oracle data center, or at the customer as a private cloud service),
which will create an instance in the cloud, discover the assets, and manage them in OEM.


Categories: Development

Transforming Analytics and Reporting

PeopleSoft Technology Blog - Wed, 2016-05-04 09:12

Content contributed by Balaji Pattabhiraman

A few months ago, I wrote in this blog about a feature in PeopleTools called Simplified Analytics.  This is really transformative technology that puts tremendous power in the hands of end users, enabling them to perform ad-hoc analytics right in context of their transactions and business processes.  It makes reporting timely, secure, and relevant.

Let's expand on that concept and take advantage of some other PeopleTools features.  In this case, imagine I am a manager or subject area expert.  I've created several simplified analytics that I want to make available to my team for use in their daily work.  Simplified analytics enables me to create a tile from any analytic I create.  Those tiles can then be placed on any Fluid home page or Dashboard, which is another new PeopleTools feature.  Let's see an example of this.

As a business administrator I've created several simplified analytics for the My Team page.  These are available in the My Analytics tab in Related Information.

Note that each tab on this page--Summary, Performance, Compensation, and Leave Balances--contains different analytics relevant to each topic.  I've created these analytics  to provide better decision support for these processes.

This calls for a quick review of an important part of the Simplified Analytics process.  When creating or editing the analytic, I have the option of publishing the analytic so it can be used by others.  (See this post for more info on creating a Simplified Analytic.)



When I choose to publish, I have the option of publishing to the My Analytics section of the Related Actions frame or publishing to the Tile Repository. Since I'll be creating an analytic dashboard, I'll save to the repository.



Now I can go to the Personalize option from the menu in any home page.  From here I can create my new analytic home page.  (This could be created as a Dashboard as well.)



In this case, I'll create a new home page called Team Analytics.  This will provide a single access point for our group of analytics, making them available in a convenient form to all team members.



Now that I've created the new home page, I can add tiles representing the analytics that were created previously.



I'll add several analytic tiles and make this home page a robust and valuable analytic tool.



Once I save the page it is displayed.  Here you see the home page with all the tiles I've added.



Now let's publish the home page and make it available to the team so all can benefit from it.  (Note: If I don't publish the home page to other users I can still use it as a personal home page.  One might do this for analytics that are used exclusively by me.) 



Enter a name and label and any other field data necessary. In this case I'm making it public, but in many cases you will want to make a page like this available only to a particular role or permission list.

Now when someone on our team logs in (in this case Rosanna), she will have access to the new Team Analytics page.

Note that the data from the analytics correspond to Rosanna's team.  If the same analytics were published to a different group, the data would be relevant to that group.  The data, context, and security are determined by the PS Query that forms the basis of the analytic.

This illustrates how you can extend the power of Simplified Analytics to make valuable decision support available to teams, not just individuals.

About Oracle: DBA_DEPENDENCY_COLUMNS

Jeff Moss - Wed, 2016-05-04 07:17

A colleague recently asked if there was a way to do column level dependency tracking. He wanted to know, for a given view, which tables and which columns on those tables it was dependent upon – without, of course, reading through the code.

I was vaguely aware that since 11gR1 Oracle has been tracking fine grained (column) dependencies, but couldn’t find a way of seeing the details stored, until I found this interesting article from Rob Van Wijk:

About Oracle: DBA_DEPENDENCY_COLUMNS

I passed the details on to our DBA who implemented it and it seemed to work, for us. Your mileage may vary, of course.
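For reference, once the view from Rob’s article has been created, a query along these lines (the column names are assumed from that article, so check them against your own installation) lists the column-level dependencies for a single view:

select referenced_owner,
       referenced_name,
       referenced_column
from   dba_dependency_columns
where  owner = 'MY_SCHEMA'    -- hypothetical schema
and    name  = 'MY_VIEW'      -- hypothetical view name
order  by referenced_name, referenced_column;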

Enable Copy n Paste in SQL*Plus command line on Windows

Jeff Moss - Wed, 2016-05-04 00:04

My colleague asked me yesterday how to enable copy and paste in the command line SQL*Plus window on Windows 7 – a simple enough task…

On the shortcut that starts the command line version of SQL*Plus, right click and bring up the Properties dialog. Navigate to the Options tab and make sure QuickEdit mode is checked on, as below:

image

Now start SQL*Plus and you’ll find that you can hold the left mouse button down whilst dragging out a selection area; pressing return then copies the selected text, whilst pressing the right mouse button pastes the copied text.

If you’d prefer to read this from a Microsoft source, try here, where other methods of setting this up are detailed as well as enabling the Autocomplete facility.