Feed aggregator

Persistent entries in controlfile

Tom Kyte - Thu, 2016-09-29 07:06
My question is about the records kept in the controlfile. Here is a general background for this question: I have a Primary database and a physical standby database both on version 12c. The redo log files on the primary database were so undersized (5...
Categories: DBA Blogs

Oracle TO_SINGLE_BYTE Function with Examples

Complete IT Professional - Thu, 2016-09-29 06:00
The Oracle TO_SINGLE_BYTE function is useful for databases with different character sets. Learn how to use it and see some examples in this article. Purpose of the Oracle TO_SINGLE_BYTE Function The purpose of the TO_SINGLE_BYTE function is to convert a string with multi-byte characters into single-byte characters. To use this function, your database character set needs […]
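A minimal illustration of the idea (the full-width input below is just an assumed sample; the conversion only happens when the database character set contains both representations):

SELECT TO_SINGLE_BYTE('ＡＢＣ１２３') AS single_byte_value
FROM   dual;
-- returns 'ABC123' when the full-width multi-byte characters have single-byte equivalents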
Categories: Development

What is overloading and how and when do I use it

Bar Solutions - Thu, 2016-09-29 04:29

Dear Patrick,

Recently I heard someone talk about overloading in Java. What is it, is it possible in PL/SQL and if so, how would I use it?

Ramesh Cumar

Dear Ramesh,

Overloading is a technique of creating multiple programs with the same name that can be called with different sets of parameters. It is definitely possible to apply this technique in PL/SQL; in fact, Oracle does this a lot in its own built-in packages. If you take a look at the SYS.STANDARD package you will find a lot of functions called TO_CHAR, but with different parameter sets. You probably never wondered how Oracle can use the same function name for completely different tasks. It's just as easy to write TO_CHAR(9), which will result in '9', as it is to write TO_CHAR(SYSDATE), which will result in the current date in the format specified by the NLS_DATE_FORMAT parameter, for example 29-12-15 if the format is 'DD-MM-RR'. If you want to get this value in a different format you can just write TO_CHAR(SYSDATE, 'Month, DDth YYYY') to get 'December, 29th 2015'. As you can see, they are all calls to a function with the same name, but with completely different sets of parameters.
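Putting those calls side by side makes the overloading visible; a quick sketch (the output shown in the comments assumes the NLS settings mentioned above):

SELECT TO_CHAR(9)                               -- '9'
,      TO_CHAR(SYSDATE)                         -- e.g. '29-12-15' with NLS_DATE_FORMAT 'DD-MM-RR'
,      TO_CHAR(SYSDATE, 'Month, DDth YYYY')     -- e.g. 'December, 29th 2015'
FROM   dual;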

This behavior cannot be realized by making all the parameters optional, like this:

FUNCTION TO_CHAR (num_in      in number   default null
                , date_in     in date     default null
                , datemask_in in varchar2 default null) return varchar2;

If you were to call this function without using named parameters, the call
TO_CHAR(SYSDATE) would not work, since SYSDATE returns a DATE while the function expects a NUMBER as its first parameter. It might still appear to work because of implicit type casts, but you get the idea.
The way this is actually implemented is by defining multiple functions in a package with the same name but with different sets of parameters.

One of the packages you can take a look at, because its implementation is readable (i.e. not wrapped), is the HTP package, which you can use to generate HTML output, for instance in an APEX application. Take a look at the PRINT procedure, for instance: in the package specification you can see there are three implementations available for this procedure:

procedure print (cbuf in varchar2 character set any_cs DEFAULT NULL);
procedure print (dbuf in date);
procedure print (nbuf in number);

The parameters of these procedures differ not only in name, but also in data type, which is a requirement for the use of overloading:
Data type and/or number and/or name of the parameters must differ.
The compiler will not complain if you don't completely comply with this rule, but at runtime you will not be able to use either one of them.

Consider the following package with its implementation

[PATRICK]SQL>CREATE OR REPLACE PACKAGE ol IS
               PROCEDURE p (param_in IN VARCHAR2);
               PROCEDURE p (param_in IN CHAR);
             END ol;
             /
[PATRICK]SQL>CREATE OR REPLACE PACKAGE BODY ol IS
               PROCEDURE p (param_in IN VARCHAR2)
               IS
               BEGIN
                 dbms_output.put_line(param_in);
               END p;
               PROCEDURE p (param_in IN CHAR)
               IS
               BEGIN
                 dbms_output.put_line(param_in);
               END p;
             END ol;
             /

If you want to call the procedure, there is no way for Oracle to decide which implementation to use.

[PATRICK]SQL>BEGIN
               ol.p('Hello World');
             END;
             /
  ol.p('Hello World');
  *
ERROR at line 2:
ORA-06550: Line 2, column 3:
PLS-00307: too many declarations of 'P' match this call.
ORA-06550: Line 2, column 3:
PL/SQL: Statement ignored.

Even if you were using named parameters you would get the same error. What we have here is so-called 'ambiguous overloading'. You can read more about this subject at http://www.stevenfeuerstein.com/learn/building-code-analysis.
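For contrast, here is a minimal sketch (the package and procedure names are my own, not from the column) of an overload set that is not ambiguous, because the parameter data types belong to different families:

CREATE OR REPLACE PACKAGE ol2 IS
  PROCEDURE p (param_in IN NUMBER);
  PROCEDURE p (param_in IN DATE);
END ol2;
/
CREATE OR REPLACE PACKAGE BODY ol2 IS
  PROCEDURE p (param_in IN NUMBER) IS
  BEGIN
    dbms_output.put_line('NUMBER overload: ' || TO_CHAR(param_in));
  END p;
  PROCEDURE p (param_in IN DATE) IS
  BEGIN
    dbms_output.put_line('DATE overload: ' || TO_CHAR(param_in));
  END p;
END ol2;
/
-- ol2.p(42) and ol2.p(SYSDATE) both resolve without any ambiguity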
So, there is definitely a use for overloading but you have to be careful about the parameters, especially when parameters have default values. If you run into a situation of ambiguous overloading you now know why the compiler didn’t complain, but the runtime engine does.

Happy Oracle’ing,
Patrick Barel

If you have any comments on this subject or you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Summer 2015.

IOT limitation

Jonathan Lewis - Thu, 2016-09-29 04:17

In the right circumstances Index Organized Tables (IOTs) give us tremendous benefits – provided you use them in the ideal fashion. Like so many features in Oracle, though, you often have to compromise between the benefit you really need and the cost of the side effect that a feature produces.

The fundamental design targets for an IOT are that you have short rows and only want to access them through index range scans of the primary key. The basic price you pay for optimised access is the extra work you have to do as you insert the data. Anything you do outside the two specific targets is likely to lead to increased costs of using the IOT – and there's one particular threat that I've mentioned twice in the past (here and here). I want to mention it one more time with a focus on client code and reporting.


create table iot1 (
        id1     number(7,0),
        id2     number(7,0),
        v1      varchar2(10),
        v2      varchar2(10),
        padding varchar2(500),
        constraint iot1_pk primary key(id1, id2)
)
organization index
including id2
overflow
;

insert into iot1
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        mod(rownum,311)                 id1,
        mod(rownum,337)                 id2,
        to_char(mod(rownum,20))         v1,
        to_char(trunc(rownum/100))      v2,
        rpad('x',500,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'IOT1',
                method_opt       => 'for all columns size 1'
        );
end;
/

alter system flush buffer_cache;

select table_name, blocks from user_tables where table_name = 'IOT1' or table_name like 'SYS_IOT_OVER%';
select index_name, leaf_blocks from user_indexes where table_name = 'IOT1';

set autotrace traceonly
select max(v2) from iot1;
set autotrace off

I’ve created an index organized table with an overflow. The table definition places all columns after the id2 column into the overflow segment. After collecting stats I’ve then queried the table with a query that, for a heap table, would produce a tablescan as the execution plan. But there is no “table”, there is only an index for an IOT. Here’s the output I get (results from 11g and 12c are very similar):

TABLE_NAME               BLOCKS
-------------------- ----------
SYS_IOT_OVER_151543        8074
IOT1

INDEX_NAME           LEAF_BLOCKS
-------------------- -----------
IOT1_PK                      504

---------------------------------------------------------------------------------
| Id  | Operation             | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |         |     1 |     4 | 99793   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE       |         |     1 |     4 |            |          |
|   2 |   INDEX FAST FULL SCAN| IOT1_PK |   100K|   390K| 99793   (1)| 00:00:04 |
---------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
     100376  consistent gets
       8052  physical reads

The index segment has 504 leaf blocks, the overflow segment has 8,074 used blocks below the high water mark. The plan claims an index fast full scan of the index segment – but the physical reads statistic looks more like a "table" scan of the overflow segment. What's actually happening?

The 100,000+ consistent gets should tell you what's happening – we really are doing an index fast full scan on the index segment, and for each index entry we go to the overflow segment to find the v2 value. Oracle doesn't have a mechanism for doing a "tablescan" of just the overflow segment – even though the definition of the IOT ought (apparently) to be telling Oracle exactly which columns are in the overflow.

In my particular test Oracle reported a significant number of “db file scattered read” waits against the overflow segment, but these were for “prefetch warmup”; in a normal system with a buffer cache full of other data this wouldn’t have happened. The other interesting statistic that showed up was “table fetch continued row” – which was (close to) 100,000, again highlighting that we weren’t doing a normal full tablescan.
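If you want to confirm the same behaviour on your own system, a query along these lines against v$mystat (run in the same session immediately after the test query) shows the relevant counters:

select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   sn.statistic# = ms.statistic#
and     sn.name in ('consistent gets', 'physical reads', 'table fetch continued row')
;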

In terms of normal query processing this anomaly of attempted "tablescans" being index driven probably isn't an issue but, as I pointed out in one of my earlier posts on the topic, when Oracle gathers stats on the "table" it will do a "full tablescan". If you have a very large table with an overflow segment it could be a very slow process – especially if you've engineered the IOT for the right reason, viz: the data arrives in the wrong order relative to the order you want to query it, and you've kept the rows in the IOT_TOP short by dumping the rarely used data in the overflow. With this in mind you might want to make sure that you write a bit of special code that gathers stats only on the columns you know to be in the IOT_TOP, creates representative numbers for the other columns, then locks the stats until the next time you want to refresh them.
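A rough sketch of the dbms_stats calls involved, using the demo table above; the figures passed to set_column_stats are purely representative placeholders that you would replace with values you know to be sensible for your data, and it is up to you how cheaply you can gather the IOT_TOP column stats:

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'IOT1',
                method_opt       => 'for columns size 1 id1, id2'      -- only the IOT_TOP columns
        );
        dbms_stats.set_column_stats(
                ownname  => user,
                tabname  => 'IOT1',
                colname  => 'V2',               -- overflow column: representative figures only
                distcnt  => 1000,
                nullcnt  => 0,
                avgclen  => 4
        );
        dbms_stats.lock_table_stats(ownname => user, tabname => 'IOT1');
end;
/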

 


Corente on VirtualBox revisited

Pat Shuff - Wed, 2016-09-28 16:20
Last week we started talking about setting up Corente and came to the conclusion that you cannot run a Corente Gateway in VirtualBox. It turns out that not only was I wrong, but I got a ton of mail from people like product managers, people who got it working, and people who generally did not agree with my conclusion. OK, I will admit that I read the manuals, played with the suggested configurations, and tried deploying it on my own. It appears that I did a few things backwards and cornered myself into an area that caused things not to work. Today we are going to walk through the steps needed to get Corente up and running in your data center using VirtualBox as a sandbox.

The first thing that you absolutely need is a Corente admin account. Without this you will not be able to create a configuration to download and everything will fail. You should have received an account email from "no-reply-cloud@oracle.com" with the title "A VPN account was created for you". If you have multiple accounts you should have received multiple emails. This is a good thing if you got multiples. It is a bad thing if you did not get any. I received mine back on August 11th of this year. I received similar emails back on April 27th for some paid accounts that I have had for a while. The email reads

The VPN account information included in this email enables you to sign in to App Net Manager Service Portal when setting up Corente Services Gateway (cloud gateway) on Oracle Cloud, which is Step 2 of the setup process.
Account Details
	
Username:  a59878_admin
Password: --not shown--
Corente Domain:  a59878
Click here for additional details about how to access your account. The link takes you to the documentation on how to set up a service gateway. The document was last updated in August and goes through the workflow on how to set up a connection.

Step 1: Obtain a trial or paid subscription to Oracle Compute Cloud Service. After you subscribe to Oracle Compute Cloud Service, you will get your Corente credentials through email after you receive the Oracle Compute Cloud Service welcome email.

Step 2: Set up a Corente Services Gateway (on-premises gateway) in your data center. This is where everything went off the rails the first time. This actually is not step 2. Step 2 is to visit the App Net Manager and register your gateway using the credentials that you received in the email. I went down the foolish path of spinning up a Linux 6 instance and running the verification to make sure that the virtualization gets passed to the guest operating system. According to the documentation, this is step 2. VirtualBox fails all of the tests suggested. I then looked for a second way of running in VirtualBox and found that the old way of doing this is being dropped from support. According to the product manager, support is being dropped because it does work in VirtualBox, and if you follow the cookbooks that are available internally at Oracle you can make it work properly. I found two cookbooks and both are too large to publish in this blog. I will try to summarize the key steps. Ask your local sales consultant to look for "Oracle Corente Cloud Services Cook Book" or "Oracle Cloud Platform - Corente VPN for PaaS and IaaS". Both walk you through installation with screen shots and recommended configurations.

Step 2a: Go to www.corente.com/web and execute the Java code that launches the App Net Manager. When I first did this it failed. I had to download a newer version of Java to get the javaws image to install. If you are on a Linux desktop you can do this with a wget of http://javadl.oracle.com/webapps/download/AutoDL?BundleId=211989 or go to the web page https://java.com/en/download/linux_manual.jsp and download the Linux64 bundle. This allows you to uncompress and install the javaws binary and associate it with the jsp file provided on the Corente site. If you are on Windows or MacOS, go to https://java.com/en/download/ and it will figure out what your desktop is and ask you to download and install the latest version of Java. What you are looking for is a version with a JDK containing the javaws binary. This binary is called from the web browser and executes the downloadable scripts from the Corente site.

Step 2b: When you go to the www.corente.com/web site it will download Java code and launch the App Manager. It should look like this:

The first time there will be no locations listed. We will need to add a location. It is important to note that the physical address that you use for the location has no relevance to the actual address of your server, gateway, or cloud hosting service. I have been cycling through major league baseball park addresses as my location. My gateway is currently located at Minute Maid Park in Houston and my desktop is at the Texas Rangers Ballpark in Arlington with my server at Wrigley Field in Chicago.

Step 2c: Launch the New Location Wizard. The information that will be needed is Name, address, maintenance window (date and reboot option), inline configuration, dhcp, dhcp client name is optional, and lan interface. Note that it is important to know ahead of time what your lan interface is going to be. Once you get your gateway configured and connected the only way to get back into this console is to do it from this network. When I first did this I did not write down the ip address and basically locked my account. I had to go to another account domain and retry the configuration. For the trial that I did I used 192.168.200.1 as the lan address and had it use 255.255.255.0 as the netmask. This will become your gateway for all subnets in your data center. By default there is a dhcp server in my house that assigns IP addresses to the 192.168.1.X network. You need to pick something different than this subnet because you can't have a broadband router acting as a gateway to the internet and a VPN router acting as a gateway router on the same subnet. The implication to this is that you will need to create a new network interface on your Oracle Compute Cloud instances that have a network connection that talk on the 192.168.200.X network. This is easy to do but selection of this network is important and writing it down is even more important. The wizard will continue and ask about adding the subnet to the Default User Group. Click Yes and add the 192.168.200.X subnet to this group.

Step 2d: At this point we are ready to install a Linux 6 or Linux 7 guest OS in VirtualBox and download the Corente Services Gateway software from http://www.oracle.com/technetwork/topics/cloud/downloads/network-cloud-service-2952583.html. From here you agree to the legal stuff and download the Corente Gateway Image. This is a bootable image that works with VirtualBox.

Step 2e: We need to configure the instance with 2G of RAM, at least 44G of disk, and two network interfaces. The first interface needs to be configured as active using the Bridged Adapter. The second interface needs to be configured as active using the Internal Network. The bridged adapter is going to get assigned to the 192.168.1.X network by our home broadband DHCP server. The second network is going to be statically mapped to 192.168.200.1 by the configuration that you download from the App Manager. You also need to mount the iso image that was downloaded for the Corente Gateway Image. When the server boots it will load the operating system into the virtual disk and ask to reboot once the OS is loaded.

Step 3: Rather than rebooting the instance we should stop the reboot after shutdown happens and remove the iso as the default boot device. If we don't, we will go through the OS install again and it will keep looping until we do. Once we boot the OS it will ask us to download the configuration file from the App Manager. We do this by setting the download site to www.corente.com, selecting dhcp as the network configuration and entering our login information for the App Manager in the next screen.

Step 4: At this point we have a gateway configured in our data center (or home in my case) and need to set up a desktop server to connect through the VPN and access the App Manager. Up to this point we have connected to the app manager via our desktop to set up the initial configuration. From this point forward we will need to do so from an ip address in the 192.168.200.x network. If you try to connect to the app manager from your desktop you will get an error message and nothing can be done. To install a guest system we boot Linux 6 or Linux 7 into VirtualBox and connect to https://66.77.134.249. To do this we need to set up the network interfaces on our guest operating system. The network needs to be the internal network. For my example I used 192.168.200.100 as the guest OS ip address and the default router is 192.168.200.1, which is our gateway server. This machine is configured with a static IP address because by default the 192.168.1.X server will answer the DHCP request and assign you to the wrong subnet. To get the App Manager to work I had to download javaws again for Linux and associate the jsp file from the www.corente.com/web site to launch using javaws. Once this was done I was able to add the guest OS as a new location.

At this point we have a gateway server configured and running and a computer inside our private subnet that can access the App Manager. This is the foundation to getting everything to work. From here you can then provision a gateway instance in the cloud service and connect your guest OS to computers in the cloud as if they were in the same data center. More on that later.

In summary, this was more difficult to do than I was hoping for. I made a few key mistakes when configuring the service. The first was not recording the IP address when I set everything up the first time. The second was using the default network behind my broadband router and not a different network address. The third was assuming that the steps presented in the documentation were the steps that I had to follow. The fourth was not knowing that I had to set up a guest OS to access the App Manager once I had the gateway configured. Each of these mistakes took hours to overcome. Each configuration failure required starting over again from scratch, and once I got to a certain point in the install I could not go back, but had to start over with another account to get back to scratch. I am still trying to figure out how to reset the configuration for my initial account. Hopefully my slings and arrows will help you avoid the pitfalls of outrageous installations.

Oracle Cloud UX Exchange: The Rapid Development Kit Experience

Usable Apps - Wed, 2016-09-28 16:17

I was thrilled to attend Oracle OpenWorld 2016 to support the Oracle Applications User Experience (OAUX) Cloud UX Rapid Development Kit (RDK) station at the OAUX Cloud Exchange with my colleagues Tim Dubois (@Timdubis), Scott Robinson (@scottrobinson), and Lancy Silveira (@LancyS).

Tweet by Misha Vaughan

From left-to-right: Holly Roland, Scott Robinson, and Tim Dubois (photo: Misha Vaughan)

Over three days, we had the great privilege of meeting many partners and customers who stopped by our station to learn more about our RDKs, ask questions, and share their real-world use cases.

I observed some themes in the questions that were asked, so today I'm offering answers to a few of the most frequently asked questions in this post for anyone who wasn’t able to stop by our station and might have these questions, too.


 Lancy Silveira, Luc Bors, Timo Hahn, Tim Dubois

From left-to-right: Lancy Silveira (OAUX), Luc Bors (eProseed), Timo Hahn (virtual7 GmbH), and Tim Dubois (OAUX) explore RDK possibilities at the OAUX Exchange at OpenWorld. Luc Bors also wrote one of the forewords to our Mobile Cloud UX Design Patterns eBook. (photo: Karen Scipi)

What’s a Rapid Development Kit (RDK)?

An RDK is a complete, standalone, integrated user interface (UI) accelerator kit created by the OAUX team with input from the Oracle PartnerNetwork. It is built on Oracle technologies and based on proven user experience design and development.

Partners can use an RDK to design and build consistent SaaS and PaaS user experiences for simplified user interfaces and mobile user experiences deployed to Oracle Cloud Services.

We offer three RDKs. Everything in each RDK is reusable. Our RDKs include:


  • Example SaaS flows and PaaS services integrations
  • Coded samples, components, and templates with Oracle Alta UI CSS and images
  • UX design pattern eBook, technical eBook, wireframing templates, and more
  • Guidance on how to use a use case to win business


Design patterns eBook, coded sample, technical eBook, wireframe template

Offered in all of our Oracle UX RDKs: Design patterns eBook, coded sample, technical eBook, wireframe template

What RDKs Are There, and When Will They Be Available?

OAUX is delighted to offer three free RDKs for your technology and development needs.

Oracle Cloud UX RDK

The Oracle Cloud UX RDK is for those who design and build SaaS simplified UIs and extensions using Oracle Application Development Framework (Oracle ADF) and deploy apps using Oracle Java Cloud Service and/or Oracle Java Cloud Service-SaaS Extensions (JCS-SX).

The Oracle Cloud UX RDK is available now. Check out our Usable Apps page for information about downloading this RDK and for information about getting started.

For a quick tour of our Usable Apps page, watch our 15-minute webinar. A Customer Connect Community account is required. If you don’t have an account, take a moment to register for one.

 Simplified home experience page

Oracle Cloud UX RDK: Simplified home experience

Oracle JET UX RDK

The Oracle JET UX RDK is for those who design and build simplified UIs using Oracle JavaScript Extension Toolkit (JET). This RDK supports any JavaScript-suitable IDE or editor and supports deploying PaaS apps to a cloud server.

The Oracle JET UX RDK will be available soon. Watch this blog and our other channels for announcements and more information when this RDK becomes available.


Oracle Cloud UX JET simplified home experience page

Oracle Cloud JET UX RDK: Simplified home experience

Oracle MAF UX RDK

The Oracle MAF UX RDK is for those who design and build mobile apps using Oracle Mobile Application Framework (Oracle MAF). This RDK supports popular devices and native device features. Apps built using this RDK can also be integrated with Oracle Mobile Cloud Service.

The Oracle MAF UX RDK will be available soon. Watch this blog and our other channels for announcements and more information when this RDK becomes available.


 Simplified home experience pages

Oracle MAF UX RDK: Simplified home experience

Our mobile design patterns eBook is available now in EPUB and PDF formats. Download your free copy now.


 Oracle Mobile Applications Cloud User Experience Design Patterns

eBook: Oracle Mobile Applications Cloud User Experience Design Patterns

How Can an RDK Help Me?

Using an RDK helps partners and customers rapidly—in hours—design, build, adapt, and deploy SaaS and PaaS simplified and mobile UIs.

We offer different RDKs so that you can choose the best one for your requirements.

RDKs are major differentiators for partners who are looking to increase business through Oracle Cloud adoption. Because an RDK helps partners produce consistent UX results, an RDK offers customers confidence in the Oracle Cloud.

Using an RDK also helps boost partner and developer productivity. Each RDK includes resources, such as coded samples, flows, design patterns, and wireframing templates that help simplify design, iteration, and coding work.

Design Patterns and Wireframing Templates

Creating reusable interaction design solutions for common use cases that can be adapted and applied across applications to deliver modern, compelling, consistent user experiences is easy with our RDKs. Partners can use design patterns and wireframing templates delivered in our RDKs:


  • Before a single line of code is written. They can be used during the innovation cycle to help expose problems early, increase productivity of application builders, and eliminate costly surprises late in the build cycle.
  • After code is written. They can be used to extend Oracle Applications Simplified User Interfaces and Oracle Mobile Applications by building modern, compelling customer solutions that look and behave like Oracle user experiences for Oracle Cloud Services.


Oracle Mobile UX RDK wireframe template

Wireframe template in the Oracle Mobile UX RDK

Sample design patterns


Sample design patterns: Oracle Cloud UX RDK (top) and Oracle Mobile UX RDK (bottom)

Success Stories

We were joined by partners at the OAUX Cloud Exchange who shared their experiences of how using the Cloud UX RDK has enabled their businesses.

Read their stories:

Simplicity, mobility, extensibility

Our Channels

For the latest news and updates on our Rapid Development Kits and all things partner-enablement, watch this blog space and follow us on these channels:


Also, if you are an APAC partner, check out the October event. Other regional events will be announced.

Database Design

Tom Kyte - Wed, 2016-09-28 12:46
Hi, Consider the following table Order_number Order_date Cust_id SalesPersion_id Amount Region 10 8/3/2010 5 2 500 US 20 18/7/2011 7 1 900 INDIA 30 12/3/2011...
Categories: DBA Blogs

What is default date in Oracle?

Tom Kyte - Wed, 2016-09-28 12:46
Hi, My question is in SQL SERVER if I write below query I will get date difference in months.... select datediff(mm,0,getdate()) : Result : 1400 months select datediff(mm,'1900-01-01','2016-09-28') : Result : 1400 months Here 0 is '190...
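For reference (this is just an assumed Oracle-side equivalent of the SQL Server expression in the question, not the published answer), MONTHS_BETWEEN gives a comparable result:

select trunc(months_between(date '2016-09-28', date '1900-01-01')) as months_diff
from   dual;
-- 1400, matching datediff(mm, '1900-01-01', '2016-09-28'), though the two functions handle partial months differently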
Categories: DBA Blogs

How To FULL DB EXPORT/IMPORT

Tom Kyte - Wed, 2016-09-28 12:46
Hello Tom, How to do a Full DB exp and Import. I do say exp system/manager@xyz FULL=Y FILE=FULL.DMP Then If I want to do a full Import to a new freshly created DB which only has the default schemas sys , system , etc. 1) Please...
Categories: DBA Blogs

Counting my Zero Days

FeuerThoughts - Wed, 2016-09-28 11:38
I have decided to start keeping track of how many Zeroes I am able to accumulate in a day.

My "Zero Day" is not the same as the hacker zero day concept.

Instead, my Zero Day has to do with Reduce, Reuse, Recycle.

There's a lot of talk and action about recycling. Much less on the reduce and reuse side, which is understandable but lamentable.

Understandable: recycle is post-consumption, reduce and reuse are pre-consumption. The more we reduce consumption, the less people consume = buy, and human economies are structured entirely around perpetual growth.

So corporations are all fine with promoting recycling, not so much reduction.

But I am convinced, and feel it is quite obvious, that the only way out of the terrible mess we are making of our world is for each of us, individually, to reduce our consumption as much as possible.

And you can't reduce lower than zero consumption. So I am going to see how well I can do at achieving some zeroes each day in my life. 

Here's what I am going to track on my Twitter account:
  • Zero use of my car
  • Zero consumption of plastic (plastic bag for groceries, for example)
  • Zero purchasing of processed food
  • Zero purchasing of anything
  • Zero seconds spent watching television
  • Zero drinking of water from plastic bottle (thanks, Rob!)
I am sure I will think of more - and will add to the above list as I do. Do you have other consumptions?

P.S. I am also trying really, really hard to only eat when I am hungry. So far I have lost 5 pounds in the last week. I hope that trend doesn't continue. :-)


Categories: Development

ORDS 3.0.7 more secure by default

Kris Rice - Wed, 2016-09-28 10:50
Defaulting PL/SQL Gateway Security

Oracle REST Data Services 3.0.7 went out yesterday. There's an important change that went in to better secure installations by default. It has always been the case that we recommend customers set the validations for the plsql gateway. There has always been a validation configuration option to lock down what procedures are accessible, which was outlined in
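For readers who haven't configured it before, the validation function referred to above follows the classic request-validation shape; a minimal sketch (the function and procedure names here are purely illustrative, and wiring it into the ORDS configuration is a separate step) looks like this:

create or replace function my_request_validation (procedure_name in varchar2)
return boolean
as
begin
  -- allow only an explicit whitelist of procedures to be called through the gateway
  return upper(procedure_name) in ('MY_SCHEMA.MY_PUBLIC_PROC');
end my_request_validation;
/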

Toyota Selects Oracle Cloud to Analyze Ha:mo Sharing Service in Verification Project

Oracle Press Releases - Wed, 2016-09-28 10:00
Press Release
Toyota Selects Oracle Cloud to Analyze Ha:mo Sharing Service in Verification Project

Using Oracle’s cloud-based data visualization service to analyze usage trends of Ha:mo RIDE sharing service

Redwood Shores, Calif.—Sep 28, 2016

Oracle announces today that Toyota Motor Corporation selected Oracle Cloud to analyze the usage trend of Ha:mo RIDE, an ultra-small mobility sharing service that it deploys in the Ha:mo low-carbon transportation system verification project.

Ha:mo is a transportation system that connects personal transportation modes and public transportation for seamless and enjoyable local mobility. The verification project was launched in Toyota City in October 2012. Using a vehicle management system connecting users, cars and parking stations, Toyota currently offers the Ha:mo RIDE sharing service with Toyota Auto Body’s COMS.

Toyota needs to analyze usage trends to verify the effectiveness of a sharing service that responds to diverse needs, such as direct transportation to daily destinations including the office, school, and commercial facilities, connection with public transportation, and touring around sightseeing spots. Toyota decided to use Oracle Data Visualization Cloud Service in order to verify the data analysis and visualization.

Oracle Data Visualization Cloud Service helps organizations analyze big data in enterprise systems in a few clicks, efficiently and quickly identify and share hidden patterns from scattered data, and obtain actionable business insights. Because all of these can be done without resources of the IT division, it can reduce time to obtain business value and accelerate implementation of measures based on analysis results.

For analyzing the usage trend of Ha:mo RIDE, Toyota has recognized Oracle Data Visualization Cloud Service for the following reasons:

  • It combines diverse structured data, shows them in a highly visualized manner in just a few clicks, and creates message-based scenarios from layouts with enhanced infographics.
  • Because Toyota carries out Ha:mo and other new projects based on the “cloud first” policy, the company recognized quick implementation of Oracle Data Visualization Cloud Service offered on the cloud.
  • It is available for a fixed monthly fee of approximately $179.00 USD (18,000 yen) per user (without tax, minimum 5 users) or higher, and desktop version is also available. Therefore, employees can complete tasks on their personal PCs, which allows them to save time and attain high cost effectiveness.
  • It is expected to be used not only in a single division but for the enterprise, and supports the need of big data and statistical analyses.

“Ha:mo’s mobility support will improve convenience, access and transportation, thus helping people move around and invigorating local communities,” said Makoto Tamura, General Manager, Ha:mo Business Planning Dept. ITS Planning Div., Connected Company, TOYOTA MOTOR CORPORATION. “We use Oracle Data Visualization Cloud Service to analyze usage trends to advance the Ha:mo next-generation transportation system and verify its effectiveness as a sharing service to meet all kinds of needs.”

 
Contact Info
Scott Thornburg
Oracle
+1.415.816.8844
scott.thornburg@oracle.com
Norihito Yachita
Oracle Japan
+81.3.6834.4835
norihito.yachita@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Scott Thornburg

  • +1.415.816.8844

Norihito Yachita

  • +81.3.6834.4835

Running PostgreSQL on ZFS on Linux

Yann Neuhaus - Wed, 2016-09-28 09:43

ZFS for Solaris has been around for many years now (since 2005). But there is also a project called OpenZFS which makes ZFS available on other operating systems. For Linux, the announcement that ZFS is production ready came back in 2013. So why not run PostgreSQL on it? ZFS provides many cool features including compression, snapshots and built-in volume management. Let's give it a try and do an initial setup. More details will follow in separate posts.

As usual I am running a CentOS 7 VM for my tests:

[root@centos7 ~] lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.2.1511 (Core) 
Release:	7.2.1511
Codename:	Core

There is a dedicated website for ZFS on Linux where you can find the instructions on how to install it for various distributions. The instructions for CentOS/RHEL are quite easy. Download the repo files:

[root@centos7 ~] yum install http://download.zfsonlinux.org/epel/zfs-release$(rpm -E %dist).noarch.rpm
Loaded plugins: fastestmirror
zfs-release.el7.centos.noarch.rpm                                                                    | 5.0 kB  00:00:00     
Examining /var/tmp/yum-root-Uv79vc/zfs-release.el7.centos.noarch.rpm: zfs-release-1-3.el7.centos.noarch
Marking /var/tmp/yum-root-Uv79vc/zfs-release.el7.centos.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package zfs-release.noarch 0:1-3.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================
 Package                  Arch                Version                     Repository                                   Size
============================================================================================================================
Installing:
 zfs-release              noarch              1-3.el7.centos              /zfs-release.el7.centos.noarch              2.9 k

Transaction Summary
============================================================================================================================
Install  1 Package

Total size: 2.9 k
Installed size: 2.9 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : zfs-release-1-3.el7.centos.noarch                                                                        1/1 
  Verifying  : zfs-release-1-3.el7.centos.noarch                                                                        1/1 

Installed:
  zfs-release.noarch 0:1-3.el7.centos                                                                                       

Complete!

[root@centos7 ~] gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
pub  2048R/F14AB620 2013-03-21 ZFS on Linux 
      Key fingerprint = C93A FFFD 9F3F 7B03 C310  CEB6 A9D5 A1C0 F14A B620
sub  2048R/99685629 2013-03-21

For the next step it depends on whether you want to go with DKMS or kABI-tracking kmod packages. I'll go with kABI-tracking kmod and therefore will disable the DKMS repository and enable the kmod repository:

[root@centos7 ~] cat /etc/yum.repos.d/zfs.repo 
[zfs]
name=ZFS on Linux for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-kmod]
name=ZFS on Linux for EL7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7/kmod/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-source]
name=ZFS on Linux for EL7 - Source
baseurl=http://download.zfsonlinux.org/epel/7/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing]
name=ZFS on Linux for EL7 - dkms - Testing
baseurl=http://download.zfsonlinux.org/epel-testing/7/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing-kmod]
name=ZFS on Linux for EL7 - kmod - Testing
baseurl=http://download.zfsonlinux.org/epel-testing/7/kmod/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing-source]
name=ZFS on Linux for EL7 - Testing Source
baseurl=http://download.zfsonlinux.org/epel-testing/7/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
[root@centos7 ~] 

Installing ZFS from here on is just a matter of using yum:

[root@centos7 ~] yum install zfs
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch
 * extras: mirror.spreitzer.ch
 * updates: mirror.de.leaseweb.net
zfs-kmod/x86_64/primary_db                                                                           | 231 kB  00:00:01     
Resolving Dependencies
--> Running transaction check
---> Package zfs.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Processing Dependency: zfs-kmod = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: spl = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzpool2 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzfs2 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libuutil1 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libnvpair1 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzpool.so.2()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzfs_core.so.1()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzfs.so.2()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libuutil.so.1()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libnvpair.so.1()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Running transaction check
---> Package kmod-zfs.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Processing Dependency: spl-kmod for package: kmod-zfs-0.6.5.8-1.el7.centos.x86_64
---> Package libnvpair1.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package libuutil1.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package libzfs2.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package libzpool2.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package spl.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Running transaction check
---> Package kmod-spl.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================
 Package                     Arch                    Version                                Repository                 Size
============================================================================================================================
Installing:
 zfs                         x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  334 k
Installing for dependencies:
 kmod-spl                    x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  110 k
 kmod-zfs                    x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  665 k
 libnvpair1                  x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                   35 k
 libuutil1                   x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                   41 k
 libzfs2                     x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  123 k
 libzpool2                   x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  423 k
 spl                         x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                   29 k

Transaction Summary
============================================================================================================================
Install  1 Package (+7 Dependent packages)

Total download size: 1.7 M
Installed size: 5.7 M
Is this ok [y/d/N]: y
Downloading packages:
(1/8): kmod-spl-0.6.5.8-1.el7.centos.x86_64.rpm                                                      | 110 kB  00:00:01     
(2/8): libnvpair1-0.6.5.8-1.el7.centos.x86_64.rpm                                                    |  35 kB  00:00:00     
(3/8): libuutil1-0.6.5.8-1.el7.centos.x86_64.rpm                                                     |  41 kB  00:00:00     
(4/8): kmod-zfs-0.6.5.8-1.el7.centos.x86_64.rpm                                                      | 665 kB  00:00:02     
(5/8): libzfs2-0.6.5.8-1.el7.centos.x86_64.rpm                                                       | 123 kB  00:00:00     
(6/8): libzpool2-0.6.5.8-1.el7.centos.x86_64.rpm                                                     | 423 kB  00:00:00     
(7/8): spl-0.6.5.8-1.el7.centos.x86_64.rpm                                                           |  29 kB  00:00:00     
(8/8): zfs-0.6.5.8-1.el7.centos.x86_64.rpm                                                           | 334 kB  00:00:00     
----------------------------------------------------------------------------------------------------------------------------
Total                                                                                       513 kB/s | 1.7 MB  00:00:03     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libuutil1-0.6.5.8-1.el7.centos.x86_64                                                                    1/8 
  Installing : libnvpair1-0.6.5.8-1.el7.centos.x86_64                                                                   2/8 
  Installing : libzpool2-0.6.5.8-1.el7.centos.x86_64                                                                    3/8 
  Installing : kmod-spl-0.6.5.8-1.el7.centos.x86_64                                                                     4/8 
  Installing : spl-0.6.5.8-1.el7.centos.x86_64                                                                          5/8 
  Installing : libzfs2-0.6.5.8-1.el7.centos.x86_64                                                                      6/8 
  Installing : kmod-zfs-0.6.5.8-1.el7.centos.x86_64                                                                     7/8 
  Installing : zfs-0.6.5.8-1.el7.centos.x86_64                                                                          8/8 
  Verifying  : libnvpair1-0.6.5.8-1.el7.centos.x86_64                                                                   1/8 
  Verifying  : libzfs2-0.6.5.8-1.el7.centos.x86_64                                                                      2/8 
  Verifying  : zfs-0.6.5.8-1.el7.centos.x86_64                                                                          3/8 
  Verifying  : spl-0.6.5.8-1.el7.centos.x86_64                                                                          4/8 
  Verifying  : kmod-zfs-0.6.5.8-1.el7.centos.x86_64                                                                     5/8 
  Verifying  : libzpool2-0.6.5.8-1.el7.centos.x86_64                                                                    6/8 
  Verifying  : libuutil1-0.6.5.8-1.el7.centos.x86_64                                                                    7/8 
  Verifying  : kmod-spl-0.6.5.8-1.el7.centos.x86_64                                                                     8/8 

Installed:
  zfs.x86_64 0:0.6.5.8-1.el7.centos                                                                                         

Dependency Installed:
  kmod-spl.x86_64 0:0.6.5.8-1.el7.centos   kmod-zfs.x86_64 0:0.6.5.8-1.el7.centos  libnvpair1.x86_64 0:0.6.5.8-1.el7.centos 
  libuutil1.x86_64 0:0.6.5.8-1.el7.centos  libzfs2.x86_64 0:0.6.5.8-1.el7.centos   libzpool2.x86_64 0:0.6.5.8-1.el7.centos  
  spl.x86_64 0:0.6.5.8-1.el7.centos       

Complete!
[root@centos7 ~]

Be aware that the kernel modules are not loaded by default, so you have to do this on your own:

[root@centos7 ~] /sbin/modprobe zfs
Last login: Wed Sep 28 11:04:21 2016 from 192.168.22.1
[postgres@centos7 ~]$ lsmod | grep zfs
zfs                  2713912  0 
zunicode              331170  1 zfs
zavl                   15236  1 zfs
zcommon                55411  1 zfs
znvpair                93227  2 zfs,zcommon
spl                    92223  3 zfs,zcommon,znvpair
[root@centos7 ~] zfs list
no datasets available

To load the module automatically at boot, create a file under /etc/modules-load.d:

[root@centos7 ~] echo "zfs" > /etc/modules-load.d/zfs.conf
[root@centos7 ~] cat /etc/modules-load.d/zfs.conf
zfs

So far so good. Let's create a ZFS file system. I have two disks available for playing with ZFS (sdb and sdc):

[root@centos7 ~] ls -la /dev/sd*
brw-rw----. 1 root disk 8,  0 Sep 28 11:14 /dev/sda
brw-rw----. 1 root disk 8,  1 Sep 28 11:14 /dev/sda1
brw-rw----. 1 root disk 8,  2 Sep 28 11:14 /dev/sda2
brw-rw----. 1 root disk 8, 16 Sep 28 11:14 /dev/sdb
brw-rw----. 1 root disk 8, 32 Sep 28 11:14 /dev/sdc

The first thing you have to do is create a new ZFS pool (I don’t care about the warnings, which is why I use the “-f” option below):

[root@centos7 ~] zpool create pgpool mirror /dev/sdb /dev/sdc
invalid vdev specification
use '-f' to override the following errors:
/dev/sdb does not contain an EFI label but it may contain partition information in the MBR.
/dev/sdc does not contain an EFI label but it may contain partition information in the MBR.
[root@centos7 ~] zpool create pgpool mirror /dev/sdb /dev/sdc -f
[root@centos7 ~] zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pgpool  9.94G    65K  9.94G         -     0%     0%  1.00x  ONLINE  -
[root@centos7 ~] zpool status pgpool
  pool: pgpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	pgpool      ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0

errors: No known data errors

[root@centos7 ~] df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   49G  1.7G   47G   4% /
devtmpfs                 235M     0  235M   0% /dev
tmpfs                    245M     0  245M   0% /dev/shm
tmpfs                    245M  4.3M  241M   2% /run
tmpfs                    245M     0  245M   0% /sys/fs/cgroup
/dev/sda1                497M  291M  206M  59% /boot
tmpfs                     49M     0   49M   0% /run/user/1000
pgpool                   9.7G     0  9.7G   0% /pgpool

What I did here is create a mirrored pool over my two disks. The OpenZFS wiki has some performance tips for running PostgreSQL on ZFS, as well as for other topics. Let's go with the recommendations:

[root@centos7 ~] zfs create pgpool/pgdata -o recordsize=8192
[root@centos7 ~] zfs set logbias=throughput pgpool/pgdata
[root@centos7 ~] zfs set primarycache=all pgpool/pgdata
[root@centos7 ~] zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
pgpool           82K  9.63G  19.5K  /pgpool
pgpool/pgdata    19K  9.63G    19K  /pgpool/pgdata
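
Two more settings that are often suggested for database workloads on ZFS are lz4 compression and disabled atime. I did not apply them in this setup (the property listing further down shows the defaults), but if you want to experiment it would look something like this:

[root@centos7 ~] zfs set compression=lz4 pgpool/pgdata   # lz4 is cheap on CPU and often a net win for databases
[root@centos7 ~] zfs set atime=off pgpool/pgdata         # avoids a metadata update on every read
[root@centos7 ~] zfs get compression,atime pgpool/pgdata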

My new ZFS file system is ready and already mounted, cool. Let's change the permissions and list all the properties:

[root@centos7 ~] chown postgres:postgres /pgpool/pgdata
[root@centos7 ~] zfs get all /pgpool/pgdata
NAME           PROPERTY              VALUE                  SOURCE
pgpool/pgdata  type                  filesystem             -
pgpool/pgdata  creation              Wed Sep 28 11:31 2016  -
pgpool/pgdata  used                  19K                    -
pgpool/pgdata  available             9.63G                  -
pgpool/pgdata  referenced            19K                    -
pgpool/pgdata  compressratio         1.00x                  -
pgpool/pgdata  mounted               yes                    -
pgpool/pgdata  quota                 none                   default
pgpool/pgdata  reservation           none                   default
pgpool/pgdata  recordsize            8K                     local
pgpool/pgdata  mountpoint            /pgpool/pgdata         default
pgpool/pgdata  sharenfs              off                    default
pgpool/pgdata  checksum              on                     default
pgpool/pgdata  compression           off                    default
pgpool/pgdata  atime                 on                     default
pgpool/pgdata  devices               on                     default
pgpool/pgdata  exec                  on                     default
pgpool/pgdata  setuid                on                     default
pgpool/pgdata  readonly              off                    default
pgpool/pgdata  zoned                 off                    default
pgpool/pgdata  snapdir               hidden                 default
pgpool/pgdata  aclinherit            restricted             default
pgpool/pgdata  canmount              on                     default
pgpool/pgdata  xattr                 on                     default
pgpool/pgdata  copies                1                      default
pgpool/pgdata  version               5                      -
pgpool/pgdata  utf8only              off                    -
pgpool/pgdata  normalization         none                   -
pgpool/pgdata  casesensitivity       sensitive              -
pgpool/pgdata  vscan                 off                    default
pgpool/pgdata  nbmand                off                    default
pgpool/pgdata  sharesmb              off                    default
pgpool/pgdata  refquota              none                   default
pgpool/pgdata  refreservation        none                   default
pgpool/pgdata  primarycache          all                    default
pgpool/pgdata  secondarycache        all                    default
pgpool/pgdata  usedbysnapshots       0                      -
pgpool/pgdata  usedbydataset         19K                    -
pgpool/pgdata  usedbychildren        0                      -
pgpool/pgdata  usedbyrefreservation  0                      -
pgpool/pgdata  logbias               throughput             local
pgpool/pgdata  dedup                 off                    default
pgpool/pgdata  mlslabel              none                   default
pgpool/pgdata  sync                  standard               default
pgpool/pgdata  refcompressratio      1.00x                  -
pgpool/pgdata  written               19K                    -
pgpool/pgdata  logicalused           9.50K                  -
pgpool/pgdata  logicalreferenced     9.50K                  -
pgpool/pgdata  filesystem_limit      none                   default
pgpool/pgdata  snapshot_limit        none                   default
pgpool/pgdata  filesystem_count      none                   default
pgpool/pgdata  snapshot_count        none                   default
pgpool/pgdata  snapdev               hidden                 default
pgpool/pgdata  acltype               off                    default
pgpool/pgdata  context               none                   default
pgpool/pgdata  fscontext             none                   default
pgpool/pgdata  defcontext            none                   default
pgpool/pgdata  rootcontext           none                   default
pgpool/pgdata  relatime              on                     temporary
pgpool/pgdata  redundant_metadata    all                    default
pgpool/pgdata  overlay               off                    default

Ready to deploy a PostgreSQL instance on it:

postgres@centos7:/home/postgres/ [pg954] initdb -D /pgpool/pgdata/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /pgpool/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgpool/pgdata/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /pgpool/pgdata/ -l logfile start

Startup:

postgres@centos7:/home/postgres/ [pg954] mkdir /pgpool/pgdata/pg_log
postgres@centos7:/home/postgres/ [pg954] sed -i 's/logging_collector = off/logging_collector = on/g' /pgpool/pgdata/postgresql.conf
postgres@centos7:/home/postgres/ [pg954] pg_ctl -D /pgpool/pgdata/ start
postgres@centos7:/home/postgres/ [pg954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=
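
Just to double check that the instance really ended up on the tuned dataset, you can ask PostgreSQL for its data directory and ZFS for the properties set earlier (nothing more than a quick verification):

postgres@centos7:/home/postgres/ [pg954] psql -c "show data_directory;" postgres
[root@centos7 ~] zfs get recordsize,logbias pgpool/pgdata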

Ready. Let's reboot and check if the ZFS file system is mounted automatically:

postgres@centos7:/home/postgres/ [pg954] df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   49G  1.8G   47G   4% /
devtmpfs                 235M     0  235M   0% /dev
tmpfs                    245M     0  245M   0% /dev/shm
tmpfs                    245M  4.3M  241M   2% /run
tmpfs                    245M     0  245M   0% /sys/fs/cgroup
/dev/sda1                497M  291M  206M  59% /boot
tmpfs                     49M     0   49M   0% /run/user/1000
postgres@centos7:/home/postgres/ [pg954] lsmod | grep zfs
zfs                  2713912  0 
zunicode              331170  1 zfs
zavl                   15236  1 zfs
zcommon                55411  1 zfs
znvpair                93227  2 zfs,zcommon
spl                    92223  3 zfs,zcommon,znvpair

Gone. The kernel modules are loaded but the file system was not mounted. What to do?

[root@centos7 ~] zpool list
no pools available
[root@centos7 ~] zpool import pgpool
[root@centos7 ~] zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pgpool  9.94G  39.3M  9.90G         -     0%     0%  1.00x  ONLINE  -
[root@centos7 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   49G  1.8G   47G   4% /
devtmpfs                 235M     0  235M   0% /dev
tmpfs                    245M     0  245M   0% /dev/shm
tmpfs                    245M  4.3M  241M   2% /run
tmpfs                    245M     0  245M   0% /sys/fs/cgroup
/dev/sda1                497M  291M  206M  59% /boot
tmpfs                     49M     0   49M   0% /run/user/1000
pgpool                   9.6G     0  9.6G   0% /pgpool
pgpool/pgdata            9.7G   39M  9.6G   1% /pgpool/pgdata

OK, so how do we get the pool mounted automatically at boot?

[root@centos7 ~] systemctl enable zfs-mount
[root@centos7 ~] systemctl enable zfs-import-cache
[root@centos7 ~] reboot

I am not sure why this is necessary; it should happen automatically.
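
If the pool still does not show up after the next reboot, the first things I would check are whether the pool is registered in the cache file that zfs-import-cache reads at boot, and whether both services really are enabled:

[root@centos7 ~] zpool set cachefile=/etc/zfs/zpool.cache pgpool   # make sure the pool is recorded in the import cache
[root@centos7 ~] ls -la /etc/zfs/zpool.cache
[root@centos7 ~] systemctl is-enabled zfs-import-cache zfs-mount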

PS: There is currently an interesting discussion about PostgreSQL on ZFS on the PostgreSQL performance mailing list.

 

The article Running PostgreSQL on ZFS on Linux appeared first on Blog dbi services.

Oracle Report Shows Smart Devices Fueling Rise in LTE Network Traffic

Oracle Press Releases - Wed, 2016-09-28 08:00
Press Release
Oracle Report Shows Smart Devices Fueling Rise in LTE Network Traffic
New Oracle Index Provides Communications Professionals a Road Map to Better Plan For and Manage Global Growth in LTE Diameter Signaling

5G WORLD ASIA, Singapore—Sep 28, 2016

Oracle today announced the “Oracle Communications LTE Diameter Signaling Index, Fifth Edition,” highlighting the explosive growth in LTE Diameter Signaling traffic as a result of advancements in consumer technologies such as streaming video and connected devices. Diameter signaling is the protocol, or language, that critical network functions use to communicate across core LTE networks. The report demonstrates that Diameter signaling shows no sign of slowing and is expected to generate 565 million messages per second (MPS) by 2020. As 5G implementations begin to roll out in the years to come, Diameter growth will only accelerate, since Diameter will also serve as the signaling technology for 5G.

To effectively manage this influx of traffic, it’s critical to understand where it’s coming from and what’s driving it. The report was designed as a tool for communications service providers (CSPs), network engineers and executives to plan for these expected increases in signaling capacity over the next five years.

For example, LTE Broadcast remains one of the fastest-growing generators of Diameter signaling traffic as video becomes more prevalent in our everyday lives. As consumers feed their insatiable need to “multi-task”, or use voice and data at the same time, the enabling technology Voice over LTE (VoLTE) is also expected to see a significant uptick.

Likewise, devices that reach beyond the traditional mobile handset, such as sensors used in smart city and connected car initiatives, will have a significant impact on Diameter signaling growth, as will the signaling associated with the policy management required to support more sophisticated data plans and applications.

“As Oracle’s new report clearly indicates, LTE traffic shows no sign of slowing down in the near future due to consumers’ smartphone usage and emerging applications like connected car,” said Greg Collins, Founder and Principal Analyst, Exact Ventures. “In order to control costs and to efficiently and effectively route signaling traffic, CSPs need to continue to invest in a scalable, reliable Diameter signaling infrastructure—or they risk network outages and overprovisioning that could damage their brand image, their customers’ experience, and their profitability.”

“LTE continues to gain momentum, fueled by new ways of transmitting and consuming information—from vine videos to car sensors that detect upcoming road hazards,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “CSPs need to continue to innovate and properly plan for Diameter signaling growth to meet changing industry and consumer demands, and leveraging the cloud is one of the clearest avenues for CSPs to achieve these goals.”

Oracle helps CSPs create a more scalable and reliable Diameter signaling infrastructure with Oracle Communications Diameter Signaling Router and Oracle Communications Policy Management. To review the full report and expected growth rates, visit: http://bit.ly/2d6hcEs

LTE Diameter Signaling Traffic by Region
  • Latin America and the Caribbean continues to show steady growth in Diameter networks. The region will generate 15.6 million MPS by 2020, a CAGR of 59 percent.
  • The Middle East and Africa will reach 19 million MPS of Diameter signaling by 2020, a CAGR of 58 percent.
  • The Asia Pacific region will become the largest generator of Diameter signaling traffic in the world, with more than 56 percent of the world’s LTE connections by 2020. This will represent 385 million MPS and a CAGR of 59 percent.
  • North America leads the world in LTE penetration as service providers move aggressively to sunset 2G and 3G services. It is projected that 59 percent of connections in the United States will be 4G/LTE by 2020. In addition, the region will more than triple MPS to 46 million by 2020, representing a 32.1 percent CAGR.
  • Europe generates 12 percent of the world’s Diameter signaling. Eastern Europe generated 1,387 million MPS in 2015, with an expected growth rate of 88 percent by 2020. In comparison, Western Europe generated 5 million MPS in 2015, growing to 34 million MPS by 2020 for a CAGR of 46 percent.
Usage and Citation

Oracle permits the media, financial and industry analysts, service providers, regulators, and other third parties to cite this research with the following attribution:
Source: “Oracle Communications LTE Diameter Signaling Index, Fifth Edition.”

Contact Info
Katie Barron
Oracle
+1 202.904.1138
katie.barron@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1 202.904.1138

Oracle NCHR Function with Examples

Complete IT Professional - Wed, 2016-09-28 06:00
In this article, I’ll explain what the Oracle NCHR function is and show some examples. Purpose of the Oracle NCHR Function The NCHR function returns a character based on the specified number code in the national character set. It’s very similar to the CHR function, but it uses the national character set.   Syntax The […]
Categories: Development

The DBA Detective- It Takes More Than Tools to Solve Performance Problems

Chris Foot - Wed, 2016-09-28 05:19

Those who excel at tuning understand that the tuning process starts with an understanding of the problem and continues with the administrator collecting statistical information. Information collection begins at a global level and then narrows in scope until the problem is pinpointed. This article provides hints and tips that can be used to determine what architectural component is causing the problem.  

DSTv27 Timezone Patches Available for E-Business Suite 12.1

Steven Chan - Wed, 2016-09-28 02:05
If your Oracle E-Business Suite environment is configured to support Daylight Saving Time (DST) or international time zones, it's important to keep your timezone definition files up-to-date. They were last changed in November 2015 and released as DSTv25.

DSTv27 is now available and certified with Oracle E-Business Suite Release 12.1. The DSTv27 update includes the timezone information from the IANA tzdata 2016f.  It is cumulative: it includes all previous Oracle DST updates. 

Is Your Apps Environment Affected?

When a country or region changes DST rules or their time zone definitions, your Oracle E-Business Suite environment will require patching if:

  • Your Oracle E-Business Suite environment is located in the affected country or region OR
  • Your Oracle E-Business Suite environment is located outside the affected country or region but you conduct business or have customers or suppliers in the affected country or region

The latest DSTv27 timezone definition file is cumulative and includes all DST changes released in earlier time zone definition files. DSTv27 includes changes to the following timezones since the DSTv24 release:

  • Asia/Novosibirsk
  • America/Cayman
  • Asia/Chita
  • Asia/Tehran
  • Haiti
  • Palestine
  • Azerbaijan
  • Chile
  • America/Caracas
  • Asia/Magadan

What Patches Are Required?

In case you haven't been following our previous time zone or Daylight Saving Time (DST)-related articles, international timezone definitions for E-Business Suite environments are captured in a series of patches for the database and application tier servers in your environment. The actual scope and number of patches that need to be applied depend on whether you've applied previous DST or timezone-related patches. Some sysadmins have remarked to me that it generally takes more time to read the various timezone documents than it takes to apply these patches, but your mileage may vary.

Proactive backports of DST upgrade patches to all Oracle E-Business Suite tiers and platforms are not created and supplied by default. If you need this DST release and an appropriate patch is not currently available, raise a service request through Support, providing a business case with your version requirements.

The following Note identifies the various components in your E-Business Suite environment that may need DST patches:

Pending Certification 

Our certification of this DST timezone patch with Oracle E-Business Suite 12.2 is currently underway.

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.   


Categories: APPS Blogs

Links for 2016-09-27 [del.icio.us]

Categories: DBA Blogs

about materialized view log mlog issues

Tom Kyte - Tue, 2016-09-27 18:26
Hi team, I want to get some field values from some tables. If a field value in a table changes, the change must be captured, without using a trigger. For example: table A, fields a1 (primary key), a2, a3; table B, fields b1 (primary key), b2, b3. If table A field ...
Categories: DBA Blogs

Help with ANSI outer join

Tom Kyte - Tue, 2016-09-27 18:26
I am not getting the same records when converting from the Oracle outer join syntax (+) to an ANSI outer join. Could you please take a look and check what I am missing? How do I write the ANSI outer join to return the same result set for the example below? <code> -- Cre...
Categories: DBA Blogs
