Feed aggregator

Video Tutorial: XPLAN_ASH Active Session History - Part 12

Randolf Geist - Sat, 2016-09-24 17:27
The final part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.


This is the final part of this tutorial, but there are more tutorials coming - about configuring the script, script internals and also the Rowsource Statistics mode of the script.

Modern software architecture – what is a database?

Yann Neuhaus - Sat, 2016-09-24 17:03

This blog post is aimed at developers and software architects. I’m probably not writing in the right place. You’re on an infrastructure experts’ blog and the author is an Oracle DBA. So what can you learn from someone working on that 30-year-old technology, talking about that old SQL language? You run with modern languages, powerful frameworks, multi-layer architecture, micro-services, distributed databases and of course all open-source. You hate your DBA because he is the major slow-down for your agile development. You don’t want SQL. You don’t want databases. You don’t want DBAs.

How can I encourage you to read this blog post? I was not always a DBA. I started as a developer, more than 20 years ago. And believe me, it was not prehistory at all. Object-Oriented design, Rapid Application Development, Automatic programming (remember C.A.S.E.?), visual programming (have you ever seen an IDE like IBM Visual Age?), query generation (early days of Business-Objects). All these evolved with more and more languages, frameworks, layers, micro-services, XML, SOA, JSON, REST,… but only one technology remained: the critical persistent data is still in a relational database and accessed by SQL.

What is a database

Most developers think that a database is there to store and retrieve data. I’m sorry but that’s wrong. That may have been right a long time ago, with key-value storage and hierarchical databases, but that’s too old for me. When I started to work, databases were already doing far more than that. Let me explain. With those prehistoric databases, you retrieved data in the same way you stored it. You insert with a key, you fetch with that key. It is easy to explain to modern developers because they “invented” it a few years ago, calling it CRUD (Create Read Update Delete). The first argument of those CRUD methods is a key value. And you store unformatted data as XML or JSON associated with that value. If this is the only feature that you need, then for sure you don’t want a database.

Relational database management systems (RDBMS) do a lot more than that. First, you can query data in a completely different way than you inserted it. And this is the real life-cycle of data. For example, you take orders, one by one, with customer and product information for each of them. Of course you can update and read them with the order ID that has been generated, but that’s only a small use case and probably not the most critical. Warehouse users will query (and update) orders by product. Delivery users will query (and update) orders by customer. Marketing users will query by channels and many other dimensions. Finance will join with accounting. With a persistence-only system, you have to code all that. Of course if you declared the mapping of associations, you can navigate through them. But the user requirement is to get a set of orders, or a quantity of products in stock, or a subset of customers, which is different from navigating through orders one by one. With a database, you don’t need to code anything. With a proper data model, what you inserted can be manipulated without its key value. All data that you have inserted can be accessed from any different point of view. And you don’t have to code anything for that. Imagine a Data Access Object with ‘QueryBy’ methods covering any combination of columns and operators.
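
To make that concrete, here is a minimal SQL sketch. The ORDERS table and its columns (CUSTOMER_ID, PRODUCT_ID, QUANTITY, ORDER_DATE) are illustrative assumptions, not taken from the post: the same rows that were inserted one by one with their generated order ID can be read back from completely different points of view, without writing any access code.

-- warehouse point of view: quantities per product
SELECT product_id, SUM(quantity)
FROM   orders
WHERE  order_date >= DATE '2016-09-01'
GROUP  BY product_id;

-- delivery point of view: orders per customer
SELECT customer_id, COUNT(*)
FROM   orders
GROUP  BY customer_id;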

A database system does not only store data, it processes data and provides a service to manipulate data.

SQL

SQL is not a language to code how to get the information. SQL only describes what you want. It’s a question you ask to a data service. Same idea as Uber, where you enter your destination and desired car and the service manages everything for you: the path, the communication, the payment, the security. You may not like the SQL syntax, but it can be generated. I’m not talking about generating CRUD statements here, but generating SQL syntax from SQL semantics expressed in Java, for example. There’s a very good example for that: jOOQ (look at the examples there).

I understand that you can hate SQL for its syntax. SQL was built for pre-compilers, not for execution-time parsing of text, and I’ll come back to that later with static SQL. But you can’t say that SQL semantics are not modern. It’s a 4th generation language that saves all the procedural coding you have to do with 3rd generation languages. SQL is a declarative language built on a mathematical theory. It goes far beyond the for() loops and if()else.

In SQL you describe the result that you want. How to retrieve the data is decided by the database system. The optimizer builds the procedural code (known as the execution plan) and the execution engine takes care of everything (concurrency, maintaining redundant structures for performance, caching, multithreading, monitoring, debugging, etc.). Do you really want to code all that or do you prefer to rely on a data service that does everything for you?
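
As a small aside, you can actually look at the procedural code the optimizer built for you. A minimal sketch (the ORDERS table is the same illustrative assumption as above):

EXPLAIN PLAN FOR
  SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id;

SELECT * FROM TABLE(dbms_xplan.display);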

You know why developers don’t like SQL? Because SQL has not been designed for programmers. It was for users. The goal was that a non-programmer could ask their question to the system (such as “give me the country of the top customers having bought a specific product in the last 3 months”) without the need of a developer. There was no GUI at that time, only the Command Line Interface, and SQL was the user-friendly interface to the database. Today we have GUIs and we don’t need SQL. But it is there, so programmers build tools or frameworks to generate SQL from a programming language. Sure it is ridiculous, and it would be better to have a programming language that directly calls the SQL semantics without generating plain old English text. We need a Structured Query Language (SQL), we just don’t need it to be in English.

Set vs loops

So why do people prefer to code everything in a procedural language (3GL)? Because this is all they learned. If at school you learned only loops and comparisons, then you are going to access data in loops. If you learned to think about data as sets, then you don’t need loops. Unfortunately, the set concepts are taught in mathematics classes but not in IT.

Imagine you have to print “Hello World” 5 times. Which pseudo-code do you prefer?


print("Hello World\n")
print("Hello World\n")
print("Hello World\n")
print("Hello World\n")
print("Hello World\n")

or


print ( "Hello World\n" + "Hello World\n" + "Hello World\n" + "Hello World\n" + "Hello World\n" )

I’ve put that in pseudo-code. I don’t want to play with String and StringBuffer here. But the idea is only to explain that if you have to process a set of things it is more efficient to process them as a set rather than one by one. That works for everything. And this is where databases rock: they process sets of rows. If you have to increment the column N by one in every row of your table T, you don’t need to start a loop and increment the column row by row. Just ask your RDBMS data service to do it: ‘/* PLEASE */ UPDATE T set N=N+1’. The “please” is in a comment because everything that is not there to describe the result is not part of SQL. You have to use hints to force the way to do it, but they are written as comments because SQL does not allow any way to tell how to do it. This was a joke of course; the “please” is not mandatory because we are talking to a machine.
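
For comparison, here is a hedged sketch of the two approaches, assuming only a table T with a numeric column N as in the example above. The first block is the row-by-row habit, the second is the set-based request to the data service:

-- row-by-row (the procedural habit)
BEGIN
  FOR r IN (SELECT rowid AS rid FROM t) LOOP
    UPDATE t SET n = n + 1 WHERE rowid = r.rid;
  END LOOP;
  COMMIT;
END;
/

-- set-based: one statement, the database does the rest
UPDATE t SET n = n + 1;
COMMIT;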

ACID

I’m not sure you get all the magic that is behind:

UPDATE T set N=N+1;

It’s not a simple loop like:

for each row in T
set N=N+1

The RDBMS does more than that. Imagine that there is a unique index on the column N. How many lines of code do you need to do that N=N+1 row by row and be sure that at no point you have duplicates? Imagine that after updating half of the rows you encounter someone else currently updating the same row. You have to wait for his commit. But then, if he updated the value of N, do you increment the past value or the new one? You can’t increment the old one or his modification will be lost. But if you increment the new one, your previously incremented rows are inconsistent because they were based on a previous state of data.
I was talking about having an index. You have to maintain this index as well. You have to be sure that what is in the cache is consistent with what is on disk, and that modifications made in the cache will not be lost in case of server failure. And if you run in a cluster, those caches must be synchronized.

Coding the same as this “UPDATE T set N=N+1” in a procedural language is not easy and can become very complex in a multi-user environment.

Maybe you have all the tools you need to generate that code. But if you code it you have to test it. Are your tests covering all concurrency cases (sessions updating or reading the same rows, or different rows from the same table,…)? What is already coded within the database has already been tested. It’s a service and you just have to use it.

Static SQL

I claimed above that SQL is there to be pre-compiled. Yes, SQL is written in plain text, like most programming languages, and must be parsed, optimized, compiled. It’s not only for performance. The main reason is that you prefer to get errors at compile time rather than at runtime. If you put SQL in text strings in your code it will remain text until execution time, when it will be prepared. And only then will you get errors. The second reason is that when the SQL is parsed, it is easy to find the dependencies. Want to see all SQL statements touching a specific column? Do you prefer to guess from some text search or to methodically follow dependencies?

Yes, SQL is there to be static and not dynamic. That claim may look strange for an Oracle DBA because all statements are dynamic in Oracle. Even at the time of precompilers (such as Pro*C) the statements were parsed but were put as text in the binary. And at first execution, they are parsed again and optimized. If you want the execution plan to be defined at deployment time, you have to use Outlines or SQL Plan Baselines. There is no direct way to bind the execution plan at deployment time in Oracle. In my opinion, static SQL as it is known on DB2, for example, is really missing in Oracle. OLTP software vendors would love to ship the optimized execution plans with their application. Imagine that all SQL statements in an OLTP application are parsed and optimized, compiled as bound procedures, similar to stored procedures, with procedural access (the execution plan), and you just have to call them. For reporting, DSS, BI you need the plans to adapt to the values and volume of data, but for OLTP you need stability. And from the application, you just call those static SQL statements like a data service.
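
For what exists today, a minimal sketch of pinning a plan with a SQL Plan Baseline (the sql_id value is a placeholder, not a real identifier):

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- load the plan(s) of a statement currently in the cursor cache
  l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => 'abcd1234wxyz5');
  dbms_output.put_line(l_plans || ' plan(s) loaded');
END;
/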

Talking about procedural execution stored in the database, I’m coming to stored procedures and PL/SQL of course.

Stored Procedures

When you code in your 3GL language, do you have functions that update global variables (BASIC is the first language I learned and this was the idea) or do you define classes which encapsulate the function and the data definition? The revolution of Object-Oriented concepts was to put data and logic in the same place. It’s better for code maintainability, with a direct dependency between procedural code and data structures. It’s better for security because data is accessible only through provided methods. And it’s better for performance because procedural code accesses data in the same place.

Yes, Object-Oriented design rocks and this is why you need to put business logic in the database. Putting the data on one component and running the code on another component of an information system is the worst you can do. It is exactly as if, in your Object-Oriented application, you stored the object attributes on one node and ran the methods on another one. And this is exactly what you do with the business logic outside of the database. Your DAO objects do not hold the data. The database does. Your objects can hold only a copy of the data, but the master copy, where concurrency, high availability and persistence are managed, is in the database.

We will talk about the language later; this is only about the fact that the procedural code runs on the same machine and in the same processes as the data access.
There are a lot of myths about running business logic in the database. Most of them come from ignorance. Until last Monday I believed that one argument against running business logic in the database was unbeatable: you pay Oracle licenses by the number of CPUs, so you don’t want to use the database CPUs to run something that can run on a free server. I agreed with that without testing it, and this is where myths come from.

But Toon Koppelaars has tested it and he proved that you use more database CPU when you put the business logic outside of the database. I hope his presentation from Oak Table World 2016 will be available soon. He proved that by analyzing exactly what is running in the database, using linux perf and flame graphs: https://twitter.com/ChrisAntognini/status/778273744242352128

All those round-trips from a remote compute server and all that row-by-row processing coming from that design have a higher footprint on the database CPUs than directly running the same thing on the database server.

PL/SQL

Running business logic on the database server can be done with any language. You can create stored procedures in Java. You can code external procedures in C. But those languages have not been designed for data manipulation. They are sufficient for calling SQL statements but not when you need procedural data access. PL/SQL is a language made for data processing. It’s not only for stored procedures. But it’s the only language that is coupled with your data structures. As I said above, it’s better to think in sets with SQL. But it may sometimes be complex. With PL/SQL you have a procedural language which is intermediate between row-by-row and sets because it has some bulk processing capabilities.

In pseudo-code the Hello World above is something like that:


forall s in ["Hello World\n","Hello World\n","Hello World\n","Hello World\n","Hello World\n"] print(s)

It looks like a loop but it is not. The whole array is passed to the print() function and the loop is done at a lower level.
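
In real PL/SQL the same idea is BULK COLLECT and FORALL. A minimal sketch, assuming a table T with columns ID and N (illustrative names):

DECLARE
  TYPE t_id_list IS TABLE OF t.id%TYPE;
  l_ids t_id_list;
BEGIN
  SELECT id BULK COLLECT INTO l_ids FROM t WHERE n < 0;
  FORALL i IN 1 .. l_ids.COUNT      -- one call to the SQL engine for the whole set
    UPDATE t SET n = 0 WHERE id = l_ids(i);
  COMMIT;
END;
/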

In PL/SQL you can also use pipelined functions, where rows are processed with a procedural language but as a data stream (like SQL does) rather than with loops and calls.
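
A minimal pipelined function sketch (the collection type and function names are made up for illustration):

CREATE TYPE num_list AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION squares(p_max IN NUMBER) RETURN num_list PIPELINED IS
BEGIN
  FOR i IN 1 .. p_max LOOP
    PIPE ROW (i * i);   -- each row is streamed to the caller as it is produced
  END LOOP;
  RETURN;
END;
/
SELECT * FROM TABLE(squares(5));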

I’ll get to other advantages of PL/SQL stored procedures, but here again there is one reason frequently raised to refuse PL/SQL. You can find more developers in .Net or Java than in PL/SQL. And because they are rare, they are more expensive. But there is a counter-argument I heard this week at Oracle Open World (though I don’t remember who raised that point, unfortunately). PL/SQL is easy to learn. Really easy. You have begin – exception – end blocks, you declare all variables, you can be modular with procedures and inline procedures, you separate signature and body, you have very good IDEs, an excellent debugger and an easy profiler,… and you can run it on Oracle XE which is free. So, if you have a good Java developer he can write efficient PL/SQL in a few days. By good developer, I mean someone who understands multi-user concurrency problems, bulk processing, multi-threading, etc.

There are fewer PL/SQL developers than Java developers because you don’t use PL/SQL. It’s not the opposite. If you use PL/SQL you will find developers, and there are many software vendors that code their application in PL/SQL. Of course PL/SQL is not free (except in Oracle XE) but it runs on all platforms and in all editions.

Continuous Integration and Deployment, dependency and versioning

I come back quickly to the advantages of using a language that is coupled with your data.

PL/SQL stored procedures are compiled and all dependencies are stored. With one query on DBA_DEPENDENCIES you can know which tables your procedure is using and which procedures use a specific table. If you change the data model, the procedures that have to be changed are immediately invalidated. I don’t know any other language that does that. You don’t want to break the continuous integration build every time you change something in a table structure? Then go to PL/SQL.
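
A minimal sketch of such a dependency query (owner and table name are illustrative):

SELECT owner, name, type
FROM   dba_dependencies
WHERE  referenced_owner = 'APP'
AND    referenced_name  = 'ORDERS'
AND    referenced_type  = 'TABLE';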

Let’s go beyond continuous integration. How do you manage database changes in continuous deployment? Do you know that with PL/SQL you can modify your data model online, with your application running and without breaking anything? I said above that procedures impacted by the change are invalidated and they must be adapted to be able to compile. But this is only for the new version. You can deploy a new version of those procedures while the previous version is running. You can test this new version and only when everything is ok do you switch the application to the new version. The feature is called Edition-Based Redefinition (EBR) and it has existed since 11g in all Oracle editions. It’s not known and used enough, but all the people I know that use it are very happy with it.
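
A minimal EBR sketch, assuming the schema has been enabled for editions and using an illustrative procedure name:

CREATE EDITION release_2;
ALTER SESSION SET EDITION = release_2;

CREATE OR REPLACE PROCEDURE greet IS   -- new version, visible only in RELEASE_2 for now
BEGIN
  dbms_output.put_line('Hello from release 2');
END;
/

-- once tested, make the new edition the default for new sessions
ALTER DATABASE DEFAULT EDITION = release_2;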

In development environments and continuous integration, it is common to say that the database always causes problems. Yes, it is true, but it’s not inherent to the database but to the data. Data is shared and durable and this is what makes it complex. The code can be deployed in different places, and can be re-deployed if lost. Data can be updated in only one place and is visible to all users. Upgrading to a new version of an application is easy: you stop the old version and start the new version. For data it is different: you cannot start from scratch and you must keep and upgrade the previous data.

Object-Relational impedance

I’m probably going too far in this blog post, but the fact that data is shared and durable is the main reason why we cannot apply the same concepts to data objects (business objects) and presentation objects (GUI components). Application objects are transient. When you restart the application, you create other objects. The identity of those objects is an address in memory: it’s different on other systems and it’s different once the application is restarted. Business objects are different. When you manipulate a business entity, it must have the same identity for any user, and this identity does not change when the application is restarted, not even when the application is upgraded. All other points given as “object-relational impedance” are minor. But the sharing and durability of business object identity is the reason why you have to think differently.

Where to put business logic?

If you’re still there, you’ve probably understood that it makes sense to run the data logic in the database, with declarative SQL or procedural PL/SQL stored procedures, working in sets or at least in bulk, and with static SQL as much as possible, and versioned by EBR.

Where to put business logic then? Well, business logic is data logic for most of it. But you’ve always learned that business logic must be in the application tier. Rather than taking the reasons given one by one and explaining what’s wrong with them, let me tell you how this idea of business logic outside of the database came about. The idea came from my generation: the awesome idea of client/server.

At first, data was processed on the servers and only the presentation layer was on the user side (for example ISAM was very similar to what we do with thin web pages). And this worked very well, but it was only green-text terminals. Then came PCs and Windows 3.11 and we wanted graphical applications. So we built applications on our PCs. But that was so easy that we implemented all the business logic there. Not because it’s a better architecture, but because anyone could build his application without asking the sysops. This was heaven for developers and a nightmare for operations having to deploy those applications on all the enterprise PCs.
But this is where offloading business logic started. Applications written with nice IDEs (I did this with Borland Paradox and Delphi) connected directly to the database with SQL. Because the application was decoupled from the database, everything was possible. We even wanted to have applications agnostic of the database, running on any RDBMS. Using standard SQL and standard ODBC. Even better: full flexibility for the developer by using only one table with Entity-Attribute-Value.

Actually, the worst design anti-patterns were invented at that time and we still see them in current applications – totally unscalable.

When the deployment of those client/server applications finally became a nightmare, and because the internet was coming with http, html, Java, etc., we went to 3-tier design. Unfortunately, the business logic remained offloaded in the application server instead of becoming part of the database server again.

I mentioned ODBC and it was another contributor to that confusion. ODBC looks like a logical separation of the application layer and the database layer. But that’s wrong. ODBC is not a protocol. ODBC is an API. ODBC does not offer a service: it is a driver running on both layers that magically communicates through the network: code to process data on one side and data on the other.

A real data service encapsulates many SQL statements and some procedural code. And that is exactly the purpose of stored procedures. This is how all data applications were designed before that client/server orgy and this is how they should be designed today, when we focus on centralization and micro-services architectures.

So what?

This blog post is already too long. It comes from 20 years of experience as a developer, application DBA, and operations DBA. I decided to write this when coming back from Oracle Open World, where several people are still advocating for the right design, especially Toon Koppelaars about the Thick Database approach at Oak Table World and the amazing panel about “Thinking clearly about application architecture” with Toon Koppelaars, Bryn Llewellyn, Gerald Venzl, Cary Millsap and Connor McDonald.

The dream of every software architect should be to attend that panel w/ @ToonKoppelaars @BrynLite @GeraldVenzl @CaryMillsap @connor_mc_d pic.twitter.com/npLzpnktMK

— Franck Pachot (@FranckPachot) September 22, 2016

Besides the marketing stuff, I was really impressed by the technical content around the Oracle Database this year at OOW16.

 

This article Modern software architecture – what is a database? appeared first on Blog dbi services.

APEX 5.0 Best Practices – Repeat Session

Denes Kubicek - Fri, 2016-09-23 10:30
Due to high demand we are now offering an additional date for the APEX 5.0 Best Practices course, November 21-23, 2016.

Registration for the course

Categories: Development

#OOW16: Recap in Photos!

WebCenter Team - Fri, 2016-09-23 05:44

Even if you attended all 5 days of Oracle OpenWorld 2016 this past week, there was WAY more content and goings on over the span of 5 days than one could possibly hope to keep track of.  But if we had to sum it all up in one word for our customers and partners, it would be "Cloud". We want to sincerely thank everyone who attended and supported us before, during and after the event. We look forward to continuing to work together to expand on our Content and Experience Management solutions. Here are some photos to recap the week!

Executed PL/SQL kept in Stored procedure but while compile Compilation error is coming.

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, Below PL/SQL is working fine. while same code of PL/SQL as below is used in Stored procedure compilation error is coming while compiling stored procedure. Could you please make a suggestion to keep below PL/SQL in below Stored Procedure so...
Categories: DBA Blogs

How to compare two tables of data????

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, I have two tables of values(dept1 and dept2). How do I compare all the data in these two tables??? It will return true if both tables contain the same data & false for otherwise... Another thing is that I do not know how to use CREATE OPE...
Categories: DBA Blogs

Java Connection Pooling with Oracle VPD

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, We have a 3-tier application that is built on Java and Oracle. In our application, we extensively make use of Oracle VPD policies for setting contexts and managing the data. Now, we are building in Java something on top of Oracle. We hit ...
Categories: DBA Blogs

PlSQL- Bulk Collect and Update (Better Approach)

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, I am looking for a better coding approach than what I have in my current system. I have two tables dog_owner(16 Million Records) and dog_owner_stage(8 Million Records). In the current process. I usually insert based on a common owner_accou...
Categories: DBA Blogs

Performance issue in CLOB\BLOB data migration

Tom Kyte - Fri, 2016-09-23 04:46
(did not get any answer for https://asktom.oracle.com/pls/apex/f?p=100:24:0::NO::P24_ID:9531842300346462307 ) Hello Tom, First of all, i would like you to thank you for your immense support on Database issues.It helps us a lot !! Question : M...
Categories: DBA Blogs

Compile_Error when refreshing a Materialized View from a procedure

Tom Kyte - Fri, 2016-09-23 04:46
We have Materialized Views which reference tables in other schemas. We can refresh/compile the Materialized Views from the command line however when we refresh/compile the Materialized View from within a procedure the job immediately aborts with...
Categories: DBA Blogs

Practise question

Tom Kyte - Fri, 2016-09-23 04:46
Hi, I was practicing some question on sql challenge about dml operation using multiple tables and got some doubt. CREATE TABLE plch_departments ( department_id INTEGER PRIMARY KEY, department_name VARCHAR2(30) ) / CREATE TABLE p...
Categories: DBA Blogs

Optimiser Trace

Tom Kyte - Fri, 2016-09-23 04:46
Hi Tom, A tricky question, recently we upgraded our systems to 11.2.0.4 and started to observe some queries taking much longer (from mins to 10+ hours). On analysing the taces / explain we found the access path had changed from the previous one, so...
Categories: DBA Blogs

Converting an EE DB 12.1.0.2.0 to SE DB 12.1.0.1.0 - Version - empty table

Tom Kyte - Fri, 2016-09-23 04:46
Hi, Im trying to convert EE 12.1.0.2.0 to SE 12.1.0.1.0 with expdp/impdp. I've found example of converting EE 11.2.0.1 to SE 11.2.0.4 where it is stated: "During import to standard edition we must use keyword VERSION=11.1 to be able to import e...
Categories: DBA Blogs

Oracle Open World 2016 – Day 4 and 5

Yann Neuhaus - Fri, 2016-09-23 03:01

At the end of Oracle Open World my last BLOG concerning OOW 2016 covering day 4 and 5:

Wednesday is the day of the Party: Oracle’s appreciation event, a concert with Gwen Stefani and Sting at the AT&T Park (Stadium of the San Francisco Baseball team, the Giants). It was a great event with awesome musicians.

Before the party I visited the session “Oracle Active Data Guard: Power, Speed, Ease and Protection” presented by Larry M. Carpenter, the grandfather of Data Guard. Here are a couple of nice new features of (Active) Data Guard in 12cR2:

  • Multi-Instance Redo Apply in RAC: Up until now the managed recovery processes (MRP) could only run on a single node of the standby RAC site and hence limited redo apply to the CPU and I/O power of that one node. In the new release a coordinator process can distribute the redo data to MRP processes on all nodes of the RAC cluster. This is called Multi-Instance Redo Apply and is configured as follows:
    Without broker: recover managed standby database disconnect using instances 4;
    With broker: Through the ‘ApplyInstances’ property.
    Caveats in the first 12cR2 release:
    Using Multi-Instance Redo Apply disallows the use of the new feature In-Memory Column Store on Active Data Guard.
    RMAN block change tracking file is disabled.

 

  • Data Guard Broker enhancement for Multitenant: As redo is generated at container (CDB) level, switchover and failover will also happen for the whole CDB. In 12cR2 there is a new command when using the Data Guard Broker to migrate or fail over a pluggable database (PDB) to another CDB on the same server:

    MIGRATE PLUGGABLE DATABASE PDBx TO CONTAINER CDB2 USING PDBx.xml CONNECT AS sys/mypassword@CDB2;

    The role of the CDB I’m connected to determines whether a PDB is migrated to another CDB or failed over to another CDB. So if, e.g., a PDB has a failure on the primary site, I can “fail over” its standby equivalent to another primary CDB on the standby machine and hence make the “standby PDB” a “primary PDB”. This works best when having 2 CDBs on 2 sites, replicating in opposite directions: CDB1 at site A replicates to CDB1 at site B; CDB2 at site B replicates to CDB2 at site A. So let’s assume PDBx in primary DB CDB1 fails at site A. You can then migrate PDBx at site B to primary DB CDB2. PDBx at site A in CDB1 will be dropped automatically, but CDB2 at site A needs to be manually updated with the new data files of PDBx.

 

  • Use In-Memory Column Store on an Active Data Guard DB: As mentioned in my previous BLOG, in 12cR2 In-Memory can be used on an Active Data Guard Instance.
    Restrictions for In-Memory on Active Data Guard:
    In-Memory expressions are captured based on queries executed on the primary only. I.e. the expression statistics store (ESS) is maintained on the primary only.
    Automatic Data Optimization (ADO) policies are triggered only on access recorded on the primary database.
    In-Memory Fast-Start and In-Memory Join-Groups are not supported in an Active Data Guard

 

  • Diagnostics and Tuning for Active Data Guard: The Diagnostics Pack (AWR), the Tuning Pack features and SQL Plan Analyzer are supported in the new release on Active Data Guard.
    AWR: In an AWR catalog database the Active Data Guard DB is registered. From there remote snapshots can be taken from the Active Data Guard instance and stored in the AWR catalog: dbms_workload_repository.create_remote_snapshot("TYPICAL", ADG-id);
    SQL Tuning Advisor: All SQL Tuning Advisor tasks are executed on the Active Data Guard instance. Necessary write activity is done through a DB link on the primary DB.

 

  • Repair blocks from NOLOGGING-operations: Blocks from NOLOGGING operations on primary can now be validated and repaired on Standby with rman commands:

    validate/recover ... nonlogged blocks;

    I.e. the primary DB does not necessarily need to be in FORCE LOGGING mode anymore. If NOLOGGING operations are necessary then they can be repaired on the Standby-DB. Previously complete datafiles had to be restored to repair NOLOGGING operations.

On Thursday I visited the panel discussion on the subject “Thinking clearly about Database Application Architecture”. Toon Koppelaars, Connor McDonald, Cary Millsap and Gerald Venzl discussed the correct application architecture for accessing data in an Oracle Database. The discussion was moderated by Bryn Llewellyn. Toon Koppelaars from the Real World Performance team at Oracle explained why the ThickDB approach, writing business logic (which needs data processing) in PL/SQL with set or bulk processing, is the best method to get a well-performing application (see also here). However, today the approach of processing the data in layers outside the DB is being preferred (“data to processing” instead of “processing to data”). Unfortunately that results in row-by-row processing with lots of network round-trips and higher CPU usage on the DB server due to the many times the whole stack on the DB server has to be traversed.
It was clear and agreed in the audience that the ThickDB approach (“processing to data”) is correct, but why have developers not changed their behavior for many years? The opinions on that differed, but critical statements were also expressed that “we as DBAs and DB consultants are part of the problem”, because there is no effort to change something in the base education of students to better understand the inner workings of a relational database system and the importance of “processing at the data”.

I’ll leave it to the reader to think about that and end my BLOGs about Oracle OpenWorld 2016.

 

This article Oracle Open World 2016 – Day 4 and 5 appeared first on Blog dbi services.

Hadoop on IaaS - part 2

Pat Shuff - Fri, 2016-09-23 02:07
Today we are going to get our hands dirty and install a single-instance standalone Hadoop cluster on the Oracle Compute Cloud. This is a continuing series on installing public domain software on Oracle Cloud IaaS. We are going to base our installation on three components. We are using Oracle Linux 6.7 because it is the easiest to install on Oracle Compute Cloud Services. We could have done Ubuntu or SUSE or Fedora and followed some of the tutorials from HortonWorks or Cloudera or the Apache Single Node Cluster guide. Instead we are going old school and installing from the Hadoop home page by downloading a tar ball and configuring the operating system to run a single node cluster.

Step 1:

Install Oracle Linux 6.7 on an Oracle Compute Cloud instance. Note that you can do the same thing by installing on your favorite virtualization engine like VirtualBox, VMWare, HyperV, or any other cloud vendor. The only true dependency is the operating system beyond this point. If you are installing on the Oracle Cloud, go with the OL_67_3GB..... option, go with the smallest instance, delete the boot disk, replace it with a 60 GB disk, rename it and launch. The key reason that we need to delete the boot disk is that by default the 3 GB disk will not take the Hadoop binary. We need to grow it to at least 40 GB. We pad a little bit with a 60 GB disk. If you check the new disk as a boot disk it replaces the default Root disk and allows you to create an instance with a 60 GB disk.

Step 2:

Run yum to update the OS, install wget, and install Java version 1.8. You need to log in to the instance as opc so that you can run as root.

Note that we are going to diverge from the Hadoop for Dummies book that we referenced yesterday. They suggest attaching to a yum repository and installing the bigtop package from the repository. We don't have that option for Oracle Linux and need to do the install from the binaries by downloading a tar or src image. The bigtop package basically takes the Apache Hadoop bundle and translates it into rpm files for an operating system. Oracle does not provide this as part of the yum repository and Apache does not create one for Oracle Linux or RedHat. We are going to download the tar file from the links provided at the Apache Hadoop homepage; we are following the install instructions for a single node cluster.

Step 3:

Get the tar.gz file by pulling it from http://apache.osuosl.org/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

Step 4: We unpack the tar.gz file with the tar xvzf hadoop-2.7.3.tar.gz command

Step 5:

Next we add the following to the .bashrc file in the home directory to set up some environment variables. The Java location comes from the yum install; the location of the hadoop code is based on downloading into the opc home directory.

export JAVA_HOME=/usr
export HADOOP_HOME=/home/opc/hadoop-2.7.3
export HADOOP_CONFIG_DIR=/home/opc/hadoop-2.7.3/etc/hadoop
export HADOOP_MAPRED_HOME=/home/opc/hadoop-2.7.3
export HADOOP_COMMON_HOME=/home/opc/hadoop-2.7.3
export HADOOP_HDFS_HOME=/home/opc/hadoop-2.7.3
export YARN_HOME=/home/opc/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin

Step 6

Source the .bashrc to pull in these environment variables

Step 7 Edit the /etc/hosts file to add namenode to the file.

Step 8

Set up ssh so that we can loop back to localhost and launch an agent. I had to edit the authorized_keys file to add a newline before the new entry. If you don't, the ssh won't work.

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
vi ~/.ssh/authorized_keys
ssh localhost
exit

Step 9 Test the configuration then configure the hadoop file system for single node.

cd $HADOOP_HOME
mkdir input
cp etc/hadoop/*.xml input
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
vi etc/hadoop/core-site.xml

When we ran this there were a couple of warnings, which we can ignore. The test should finish without error and generate a long output list. We then edit the core-site.xml file by changing the following lines at the end:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>

Step 10

Create the hadoop file system with the command hdfs namenode -format

Step 11

Verify the configuration with the command hdfs getconf -namenodes

Step 12

Start the hadoop file system with the command sbin/start-dfs.sh

At this point we have the hadoop filesystem up and running. We now need to configure MapReduce and test functionality.

Step 13

Make the HDFS directories required to execute MapReduce jobs with the commands

  hdfs dfs -mkdir /user
  hdfs dfs -mkdir /user/opc
  hdfs dfs -mkdir input
  hdfs dfs -put etc/hadoop/*.xml input

Step 14 Run a MapReduce example and look at the output

  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep 
    input output 'dfs[a-z.]+'
  hdfs dfs -get output output
  cat output/* output/output/*

Step 15

Create a test program to do a wordcount of two files. This example comes from an Apache MapReduce Tutorial

hdfs dfs -mkdir wordcount
hdfs dfs -mkdir wordcount/input
mkdir ~/wordcount
mkdir ~/wordcount/input
vi ~/wordcount/input/file01
 - add 
Hello World Bye World
vi ~/wordcount/input/file02
- add
Hello Hadoop Goodbye Hadoop
hdfs dfs -put ~/wordcount/input/* wordcount/input
vi ~/wordcount/WordCount.java

Create WordCount.java with the following code

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Step 16

Compile and run the WordCount.java code

cd ~/wordcount
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64
export HADOOP_CLASSPATH=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/lib/tools.jar
hadoop com.sun.tools.javac.Main WordCount.java
jar cf wc.jar WordCount*.class
hadoop jar wc.jar WordCount wordcount/input wordcount/output
hadoop fs -cat wordcount/output/part-r-00000

At this point we have a working system and can run more MapReduce jobs, look at results, and play around with Big Data foundations.

In summary, this is a relatively complex example. We have moved beyond a simple install of an Apache web server or Tomcat server and editing some files to get results. We have the foundations for a Big Data analytics solution running on the Oracle Compute Cloud Service. The steps to install are very similar to the other installation tutorials that we referenced earlier for Amazon and virtual machines. Oracle Compute is a good foundation for public domain code. Per core, it is cheaper than other cloud vendors. Networking is non-blocking and higher performance. Storage throughput is faster, optimized for high compute I/O, and tied to the compute engine. Hopefully this tutorial has given you the foundation to start playing with Hadoop on Oracle IaaS.

Links for 2016-09-22 [del.icio.us]

Categories: DBA Blogs

NetFlix – Blockbuster Movies

Marco Gralike - Thu, 2016-09-22 23:08
I mean, in principle it’s not that special, being that the info is already on…

Oracle Open World 2016 from a PeopleSofter point of view: Thursday 22nd and Wrap Up

Javier Delgado - Thu, 2016-09-22 18:33
So Open World 2016 has come to an end. But before the curtains fell, there was still some activity and interesting PeopleSoft sessions to attend.

Reporting and Analytics

My day started with a session named Getting the most Out of PeopleSoft - Reporting and Analytics [CON7075] with the participation of Matthew Haavisto, Jody Schnell and Ramasimha Rangaraju.

Reporting has evolved a lot in the last few years, and not only in PeopleSoft. Gone are (or should be) the days in which a report meant a PDF or a printout. Today reporting is not only interactive but also actionable. I actually delivered a presentation on this topic back in April 2016 at the PeopleSoft Tour in Madrid. I later recorded it on YouTube, but unfortunately it is only available in Spanish.



PeopleSoft is not an exception to this reporting evolution. Tools like Pivot Grids, actionable charts and Simplified Analytics all point in the same direction. Unfortunately, not all users are ready for this transition, as I have heard from many customers that upper management does not want to use a digital device to access the reports, so they still prefer the printed alternatives. And yes, I'm writing this as of September 2016.

Anyway, going back to the session, there were some points that I found particularly interesting:

  • The ability in PeopleTools 8.55 to generate submittable PDFs using BI Publisher. This functionality is particularly useful for government forms, but can also be used to gather and process ad-hoc data from users.
  • Oracle JET has been adopted as the charting engine, giving PeopleSoft a more consistent user experience with other Oracle products. Given the amount of development effort dedicated to Oracle JET charting features, PeopleSoft may quickly benefit and rapidly evolve its charting capabilities.
  • The introduction of Self Service Scheduling simplifies the execution of reports by linking them to pages and hiding the complexity of run controls to users.

Another point I found interesting was the explanation of how macros adversely affect PS/nVision performance, as they require PeopleSoft to execute reports twice, first using the Microsoft-recommended OpenXML method and then, as that method does not support macros, using the traditional PeopleSoft Excel automation. Interesting to know!



Meet the PeopleTools Experts

The next session was one of my favorite ones, as it consists of several round tables where you can directly talk to the PeopleTools development team. It is also useful to hear the concerns and doubts of customers and partners.

There were plenty of questions about Fluid User Interface, Cloud Architecture, Lifecycle Management and so on. If you ever attend Oracle Open World in the future, I strongly recommend this session.

PeopleTools Product Team Panel Discussion

Just after lunch, the next session was this classic of Oracle Open World. It consists of an open discussion between the PeopleTools Product Team and customers and partners. It is always interesting to attend this type of session and listen to thoughts and ideas from the PeopleSoft community.


Monitoring and Securing PeopleSoft

My last session was Hands-On with PeopleSoft: Develop Practices to Harden and Protect PeopleSoft [CON7074] delivered by Greg Kelly. The presentation was basically around the Securing Your PeopleSoft Application Environment document available here. I found it really illustrative, and taking into account my rather shallow knowledge of security, rather scary :). Next time I will make sure I prepare myself upfront to take more advantage of Greg's wide knowledge of the area.

Wrap Up

This was an interesting edition of Oracle Open World as far as PeopleSoft is concerned. There were not many announcements made since the last edition. Still, I think the PeopleSoft team at Oracle is doing a great job. This is still a great product indeed.

On the other hand, I have the feeling that PeopleSoft customers are lagging behind in terms of adoption of new features. Now, personally I don't think this is because updating PeopleSoft to the latest features is complex. Actually, we can say that with Selective Adoption, DPK and other Lifecycle Management tools it has never been this easy to update. The barrier, in my opinion, is not in the product, but in marketing. All Oracle marketing and sales horsepower has been exclusively dedicated during the last years to their cloud offering. Under these circumstances, it is reasonable to have uncertainties about how wise it is to make future investments in PeopleSoft as opposed to moving to the cloud. And we know uncertainty does not accelerate investment...

From a more personal standpoint, this was great event in terms of networking. Being able to meet PeopleSoft talents such as Jim Marion, Graham Smith, Sasank Venama and many others including the PeopleSoft development team is always the best way to nurture and trigger new ideas.

Just my two cents. Thanks for following this blog during this event!

PS: I bought Jim Marion's book from the bookshop at Moscone South and have a good deal of fun and learning guaranteed for my flight back to Madrid.



OOW 2016: database news

Yann Neuhaus - Thu, 2016-09-22 14:20

Here is some information about what was announced or presented at Oracle Open World. The news is being relayed pretty much everywhere, mainly in English, so here is a summary intended for French-speaking readers.

Oracle Database 12c Release 2

Let's be clear: the database is not the main topic of Open World. As expected, this is a 'Cloud First' release, but the version made public on Monday is a limited one.
If you have used 'Schema as a Service', it is more or less the same idea, except that here it is 'PDB as a Service'. With multitenant, consolidation by schema is replaced by consolidation by PDB, which has the advantage of virtually presenting a complete database, with its public objects, its multiple schemas, etc.
So no administration access: it is a 'managed' service, administered by Oracle.
Multitenant makes it possible to grant DBA rights on a PDB while preventing any interaction with the rest of the system. These are new 12.2 features, among them the 'lockdown profiles' that were developed for this purpose.
The service is called 'Exadata Express Cloud Service' because it runs on Exadata (so HCC compression and soon SmartScan). Most options are available as well (In-Memory, Advanced Compression,…)
'Express' stands for the ease and speed of provisioning: a few minutes. The goal is that a developer can create an easily accessible database service (over encrypted SQL*Net) in 5 minutes. The idea is that it should be as easy for a developer to create an Oracle database as it is to create Postgres, Cassandra or MongoDB databases.
And of course, if all the options are included, developers will use them and they will become necessary in production.

There will soon be a data center in Europe. For the moment it is only in the USA. The price is attractive (CHF 170 per month) but the database is rather limited in terms of CPU, storage and memory. It is mainly for development and sandbox use.

So for now, 12c Release 2 is only available in the form of PDBaaS on the Exadata Express Cloud Service:

EXCS

Before the end of the year we should have 12.2 as DBaaS (the non-managed service we currently know on the Oracle PaaS), and the General Availability version will come after that, probably in 2017.

 

This article OOW 2016: database news appeared first on Blog dbi services.

OBIEE12c - Upgrading to Version 12.2.1.1

Rittman Mead Consulting - Thu, 2016-09-22 10:36

INTRODUCTION

The new version of OBIEE 12c, 12.2.1.1 to be exact, is out, so let’s talk about it. It’s my intent that after reading this, you can expect some degree of comfort in regards to possibly doing this thing yourself (should you find yourself in just such a circumstance), but if not, feel free to drop us a line or give us a ring. It should be noted that Oracle documentation explicitly indicates that you’re going to need to upgrade to OBIEE version 12.2.1.0, which is to say you’re going to have to bring your 11g instance up to 12c before you can proceed with another upgrade. A colleague here at RM and I recently sat down to give the upgrade process (click there for the Oracle doc) a go on one of our hosted windows servers, and here’s the cut and dry of it. The examples throughout will be referencing both Linux and Windows, so choose how you’d like. Now, if you’ve gone through the 12c install process before, you’ll be plenty familiar with roughly 80% of the steps involved in the upgrade. Just to get this out of the way, no, it’s not a patch (in the sense that you’re actually going through the OBIEE patching process using OPatch). In fact, the process almost exactly mirrors a basic 12c install, with the addition of a few steps that I will make darn sure we cover in their entirety below. Speaking of which, I’m not going to do a play-by-play of the whole thing, but simply highlight those steps that are wholly unfamiliar. To provide some context, let’s go through the bullet points of what we’ll actually be doing during the upgrade.

  1. First, we’ll make sure we have a server-appropriate, supported version of Java installed (8_77 is the lowest version) and that this guy corresponds to the JAVA_HOME you’ve got set up.

  2. Next, we’ll be running the install for the WebLogic server into a NEW oracle home. That’s right, you heard me. A. new. oracle. home.

  3. After that, we’ll be running a readiness check to make sure our OBIEE bits won’t run into any trouble during the actual upgrade process. This checks all OBIEE components, including those schemas you installed during the initial install process. Make sure to have your application database admin credentials on hand (we’ll talk about what you need below in more detail). The end of this step will actually have us upgrade all those pieces the readiness checker deems worthy of an upgrade.

  4. Next, we’ll reconfigure and upgrade our existing domain by running the RECONFIGURATION WIZARD!!!!! and upgrade assistant, respectively.

  5. Lastly, we’ll start up our services, cross our fingers, hold onto our four leaf clovers, etc.. (just kidding, at least about that last part).

Before we begin, however, let’s check off a few boxes on the ‘must have’ list.

  • Download all the files here, and make sure you get the right versions for whatever kind of server your version of OBIEE hangs out in. The java version will be 8_101 which will work out just fine even though the minimum needed is 8_77.

  • Get those database credentials! If you don’t know, drop everything and figure out how you’re going to access the application database within which the OBIEE 12c schemas were installed. You’ll need the user name/pass for the SYS user (or user with SYS privileges), and the database connection string as well, including the service name, host, and port.

  • Make sure you have enough disk space wherever you’re installing the upgrade. The downloads for the upgrade aren’t small. You should have at least 150GB, on a DEV box, say. You don’t want to have to manage allocating additional space at a time like this, especially if it involves putting in a ticket with IT (wink wink)! Speaking of which, you’ll also need the server credentials for whichever user 12c was installed under. Note that you probably don’t need root if it was a linux machine, however there have been some instances where I’ve needed to have these handy, as there were some file permission issues that required root credentials and were causing errors during an install. You’ll also need the weblogic/obiee admin user (if you changed the name for some reason).

  • Lastly, make sure you’re at least a tad bit familiar with both the path to the oracle and to the domain home.

SETTING UP JAVA

After downloading the version of Java you need, go ahead and update it via the .rpm or .exe, etc… Make sure to update any environment variables you have set up, and to update both the JAVA_HOME variable AND the PATH to reference the new Java location. As stated above, at the time of this blog, the version we used, and that is currently available, is 8_101. During the upgrade process, we got a warning (see below) about our version not being 8_77. If this happens to you, just click Next. Everything will be alright, promise.

Java Version Warning

A NEW HOME FOR ORACLE

Did you click the link to the upgrade doc yet? If not, do so now, as things are about to get kind of crazy. Follow along as we walk through the next steps if you’d like. First, stop services and disable the SSL like it tells you to. Then, start OBIEE services back up and then run the infrastructure jar (java -jar fmw_12.2.1.1.0_infrastructure.jar) for the WebLogic server install. Again, I’m not going to go pic by pic here as you can assume most everything resembles the initial 12c install process, and this part is no different. The one piece of this puzzle we need to focus on is establishing a new oracle home. After skipping those auto updates, move onto step 3 where we are, in fact, going to designate a new oracle home. You’ll see that, after completing the WebLogic install, we’ll have a bunch of updated feature sets, in addition to some new directories in our 12.2.1.1 oracle home. For example, if your original home is something like:

/u01/app/oracle/fmw

change it to:

New Oracle Home

when it asks you to enter a new one.

Breeze through the rest of the steps here, and remember to save that response file!

UPDATING OBIEE

Unzip both of the fmw_12.2.1.1.0_bi_linux64_Disk#_#of2.zip files, making sure that your OBIEE install files are in the same directory. For windows, this will be the executable from the first zip file, and the zip file from the second part of disk 1. Execute the binary (on linux) or .exe, going through the usual motions and then in step 3, enter the NEW oracle home for 12.2.1.1. In the example above, it would be:

/u01/app/oracle/fmw2

for Linux, and likewise, for Windows:

Enter Existing Oracle Home

Again, there isn’t too much to note or trap you here beyond just making sure that you take special care not to enter your original oracle home, but the one you created in the previous section. Proceed through the next steps as usual and remember, save your response file!

UPDATING THE 12C SCHEMAS - USING THE READINESS CHECKER AND UPDATE ASSISTANT

Run the readiness checker from:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua -readiness

This next series of steps will take you through all the schemas currently deployed on your application database and confirm that they won’t explode once you take them through the upgrade process. In step 2 of 6, make sure that you’re entering the port for EM/Console (9500 by default). Remember when I said you’re going to need the DB credentials you used to install 12c in the first place? Well, here’s where we’re going to use them. The readiness checker will guide you through a bunch of screens that essentially confirm the credentials for each schema installed, and then present a report detailing which of these will actually get upgraded. That is to say, there are some that won’t be. I really like this new utility as an extra vote of confidence for a process that can admittedly be oftentimes troublesome.

Readiness Checker

Readiness Report

Once you’ve validated that those schemas ready for update, go ahead and stop OBI12c services using the EXISTING oracle home.

Pro tip: they’ve made it super easy to do this now by just pointing your bash_profile to the binaries directory in OBIEE’s bitools folder (ORACLE_HOME/user_projects/domains/bi/bitools/bin). After logging this entry in your profile, you can simply type start.sh or stop.sh to bring everything up or down, not to mention take advantage of the myriad other scripts that are in there. Don't type those paths out every time.

I digress… After the services come down, run the upgrade assistant from within the NEW oracle home, as below:

Citing the previous example:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua

After bringing up the install dialogue box, move on to step 2, and select the All Schemas Used by a Domain option (as in the example above), unless of course you’d like to hand select which ones you’d like to upgrade. I suppose if you were thinking about scrapping one you had previously installed, then this would be a good option for you. Make sure the domain directory you specify is from your existing/old 12c instance, as below:

Upgrade Assistant-Existing Domain

Move through the next series of steps, which are more or less self explanatory (no tricks here, promise), once again validating connection credentials until you get to step 12. As always, save the response file, select Upgrade, and then watch the magic happen,….hopefully. Congratulations, you’ve just updated your schemas!

Schema Update Protocol Complete

WHO INVITED A WIZARD TO THE PARTY? - RECONFIGURING THE BI DOMAIN AND UPDATING THE BI CONFIGURATION

Like I said before, I won’t be covering every single step of this process, i.e., doing the map viewer portion, which means you’ll have to still consult the…oracle, on some of this stuff. That being said, don’t gloss over backing up the map viewer file... you have to do it. This is simply an attempt to help make the upgrade process a little easier to swallow and hopefully make some of the more confusing steps a bit clearer. Moving on. Guess what? It’s time to run another series of dialogue boxes. Beats the heck out of scripting this stuff though, I guess. Open up the RECONFIGURATION WIZARD!!!!! as instructed in the documentation, from the location within your NEW oracle home. The first step will prompt us for the location of the domain we want to upgrade. We want to upgrade our existing 12c domain (the old one). So type that in/browse for it. Right now.

Enter Existing Domain Home

Validate your java version and location in step 3 and then click your way through the next few screens, ensuring that you’ve at least given your stamp of approval on any pre-filled or manually filled entries in each dialogue box. Leave step 7 alone and click Next to get to the screen where we’re actually going to be starting the reconfiguration process. Click through and exit the RECONFIGURATION WIZARD!!!!!

Validate Java

Configuration Celebration

Don’t forget to restore the map viewer config file at this point, and then launch the configuration assistant again, this time selecting the All Configurations Used By a Domain option in step 2. Make sure you’ve entered the location of the existing 12c domain in this step as well, and NOT the one created under the new oracle home.

Enter Proper Domain

Click through the next steps, again, paying close attention to all prompts and the location for the map viewer xml file. Verify in step 7 that the directory locations referenced for both domain and oracle map viewer are for the existing locations and NOT those created by the install of the update.

Correct Location Verification Affirmation

WRAPPING UP AND NOTES

You can now boot up ssl (as below) and then start OBIEE services.

DOMAIN_HOME/bitools/bin/ssl.sh internalssl true

Note: if you have tnsadmin or ldap.ora, place copies under NEW_ORACLE_HOME/network/admin

You can ignore the new oracle home created at this time, as, in my opinion, we're going to have to do something similar for any following updates for 12c. What did you think of the upgrade process and did you run into any issues? Thanks so much for reading, and as always, if you find any inconsistencies or errors please let us hear about them!

Categories: BI & Warehousing
