Feed aggregator

assign sqlplus value to shell script variable

Tom Kyte - Thu, 2017-04-13 11:46
Hi Tom, I am trying to get the f_name value into FILE_VAR. My procedure has its 2nd and 3rd parameters as OUT parameters. I want to capture the "f_name" variable's value in the shell script variable "FILE_VAR". I tried the code below but it is not working. <code> FILE_...
Categories: DBA Blogs

Adaptive execution plans inside Stored Procedure

Tom Kyte - Thu, 2017-04-13 11:46
I have a stored procedure that uses a global temp table called ITEMS with delete on commit (session-based stats). Inside the stored procedure, sometimes 1 row gets added to ITEMS, sometimes 200,000 rows. ITEMS is then used in a few ot...
Categories: DBA Blogs

Regex for a comma-separated string with a predefined list of words that are allowed.

Tom Kyte - Thu, 2017-04-13 11:46
Hello, with this regex: not REGEXP_LIKE (categories, '[^(subcat1| subcat2 |subcat3|null; )]', 'i'), I can specify which values are allowed, separated by ';'. But I can still do this: <code>insert into regexText Values(';') ...
Categories: DBA Blogs

Need to call a RESTful API using Oracle PL/SQL

Tom Kyte - Thu, 2017-04-13 11:46
Hi, I am new to calling RESTful APIs from Oracle. I have this huge XML (> 4000 characters) which I need to post to a remote RESTful API endpoint. Please let me know how to accomplish this. Below is my sample code that I am playing with right now....
Categories: DBA Blogs

ORA_HASH Value Collision.

Tom Kyte - Thu, 2017-04-13 11:46
Hi Tom, I am attempting to assign a unique value to an expression which is distinct and derived from the concatenation of multiple fields. Here's the usage: Query1: create table Table2 parallel 16 as select /*+ parallel(a,16)*/ disti...
Categories: DBA Blogs

ORDS Standalone and URI Rewrites

Kris Rice - Thu, 2017-04-13 10:45
My last post, How to add an NCSA style Access Log to ORDS Standalone, explained what ORDS standalone is and that it is based on Eclipse Jetty.  Jetty offers far more than ORDS exposes in its standalone.  There's a long list of all the features and configuration options in the documentation, http://www.eclipse.org/jetty/documentation/9.2.21.v20170120/ A recent question came up for doing

In-core logical replication will hit PostgreSQL 10

Yann Neuhaus - Thu, 2017-04-13 09:03

Finally in PostgreSQL 10 (expected to be released this September) a long awaited feature will probably appear: in-core logical replication. PostgreSQL has supported physical replication since version 9.0, and now the next step happens with the implementation of logical replication. This will be a major help in upgrading PostgreSQL instances from one version to another with no (or almost no) downtime. In addition, this can be used to consolidate data from various instances into one instance for reporting purposes, or you can use it to distribute only a subset of your data to selected users on other instances. In contrast to physical replication, logical replication works on the table level, so you can replicate changes in one or more tables, one database or all databases in a PostgreSQL instance, which is quite flexible.

In PostgreSQL logical replication is implemented using a publisher and subscriber model. This means the publisher is the one who will send the data and the subscriber is the one who will receive and apply the changes. A subscriber can be a publisher as well, so you can build cascading logical replication. Here is an overview of a possible setup:

[Image: pg-logical-replication-overview]

For setting up logical replication when you do not start with an empty database, you'll need to initially load the database you want to replicate to. How can you do that? I have two PostgreSQL 10 instances (built from the git sources) running on the same host:

Role        Port
Publisher   6666
Subscriber  6667

Let's assume we have this sample setup on the publisher instance:

drop table if exists t1;
create table t1 ( a int primary key
                , b varchar(100)
                );
with generator as 
 ( select a.*
     from generate_series ( 1, 5000000 ) a
    order by random()
 )
insert into t1 ( a,b ) 
     select a
          , md5(a::varchar)
       from generator;
select * from pg_size_pretty ( pg_relation_size ('t1' ));

On the subscriber instance there is the same table, but empty:

create table t1 ( a int primary key
                , b varchar(100)
                );

Before we start with the initial load, let's take a look at the process list:

postgres@pgbox:/home/postgres/ [PUBLISHER] ps -ef | egrep "PUBLISHER|SUBSCRIBER"
postgres 17311     1  0 11:33 pts/0    00:00:00 /u01/app/postgres/product/dev/db_01/bin/postgres -D /u02/pgdata/PUBLISHER
postgres 17313 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: checkpointer process   
postgres 17314 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: writer process   
postgres 17315 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: wal writer process   
postgres 17316 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: autovacuum launcher process   
postgres 17317 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: stats collector process   
postgres 17318 17311  0 11:33 ?        00:00:00 postgres: PUBLISHER: bgworker: logical replication launcher   
postgres 17321     1  0 11:33 pts/1    00:00:00 /u01/app/postgres/product/dev/db_01/bin/postgres -D /u02/pgdata/SUBSCRIBER
postgres 17323 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: checkpointer process   
postgres 17324 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: writer process   
postgres 17325 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: wal writer process   
postgres 17326 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: autovacuum launcher process   
postgres 17327 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: stats collector process   
postgres 17328 17321  0 11:33 ?        00:00:00 postgres: SUBSCRIBER: bgworker: logical replication launcher   

You’ll notice that there is a new background process called “bgworker: logical replication launcher”. We’ll come back to that later.

Time to create our first publication on the publisher with the create publication command:

postgres@pgbox:/u02/pgdata/PUBLISHER/ [PUBLISHER] psql -X postgres
psql (10devel)
Type "help" for help.

postgres=# create publication my_first_publication for table t1;
CREATE PUBLICATION

On the subscriber we need to create a subscription by using the create subscription command:

postgres@pgbox:/u02/pgdata/SUBSCRIBER/ [SUBSCRIBER] psql -X postgres
psql (10devel)
Type "help" for help.

postgres=# create subscription my_first_subscription connection 'host=localhost port=6666 dbname=postgres user=postgres' publication my_first_publication;
ERROR:  could not create replication slot "my_first_subscription": ERROR:  logical decoding requires wal_level >= logical

Ok, good hint. After changing that on both instances:

postgres@pgbox:/home/postgres/ [SUBSCRIBER] psql -X postgres
psql (10devel)
Type "help" for help.

postgres=# create subscription my_first_subscription connection 'host=localhost port=6666 dbname=postgres user=postgres' publication my_first_publication;
CREATE SUBSCRIPTION
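For reference, the change hinted at above boils down to raising wal_level on both instances and restarting them. A minimal sketch of that step (it is not shown in the original post; editing postgresql.conf directly works just as well):

postgres=# alter system set wal_level = 'logical';
ALTER SYSTEM

Since wal_level can only be changed at server start, both instances have to be restarted before the create subscription succeeds as shown above.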

If you are not on super fast hardware and check the process list again you’ll see something like this:

postgres 19465 19079 19 11:58 ?        00:00:04 postgres: SUBSCRIBER: bgworker: logical replication worker for subscription 16390 sync 16384  

On the subscriber the "logical replication launcher" background process launched a worker process which syncs the table automatically (this can be avoided by using "NOCOPY DATA"; see the sketch a bit further down):

postgres=# show port;
 port 
------
 6667
(1 row)

postgres=# select count(*) from t1;
  count  
---------
 5000000
(1 row)

Wow, that was really easy. You can find more details in the logfile of the subscriber instance:

2017-04-13 11:58:15.099 CEST - 1 - 19087 -  - @ LOG:  starting logical replication worker for subscription "my_first_subscription"
2017-04-13 11:58:15.101 CEST - 1 - 19463 -  - @ LOG:  logical replication apply for subscription my_first_subscription started
2017-04-13 11:58:15.104 CEST - 2 - 19463 -  - @ LOG:  starting logical replication worker for subscription "my_first_subscription"
2017-04-13 11:58:15.105 CEST - 1 - 19465 -  - @ LOG:  logical replication sync for subscription my_first_subscription, table t1 started
2017-04-13 11:59:03.373 CEST - 1 - 19082 -  - @ LOG:  checkpoint starting: xlog
2017-04-13 11:59:37.985 CEST - 2 - 19082 -  - @ LOG:  checkpoint complete: wrote 14062 buffers (85.8%); 1 transaction log file(s) added, 0 removed, 0 recycled; write=26.959 s, sync=2.291 s, total=34.740 s; sync files=13, longest=1.437 s, average=0.171 s; distance=405829 kB, estimate=405829 kB
2017-04-13 12:02:23.728 CEST - 2 - 19465 -  - @ LOG:  logical replication synchronization worker finished processing
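As an aside to the "NOCOPY DATA" remark above: if you do not want the automatic initial copy, the subscription can be created without it. A sketch of how that looks, assuming the option spelling of the released PostgreSQL 10, where it is expressed as with (copy_data = false):

postgres=# create subscription my_first_subscription connection 'host=localhost port=6666 dbname=postgres user=postgres' publication my_first_publication with (copy_data = false);
CREATE SUBSCRIPTION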

On the publisher instance you get another process for sending the changes to the subscriber:

postgres 19464 18318  0 11:58 ?        00:00:00 postgres: PUBLISHER: wal sender process postgres ::1(41768) idle

Changes to the table on the publisher should now get replicated to the subscriber node:

postgres=# show port;
 port 
------
 6666
(1 row)
postgres=# insert into t1 (a,b) values (-1,'aaaaa');
INSERT 0 1
postgres=# update t1 set b='bbbbb' where a=-1;
UPDATE 1

On the subscriber node:

postgres=# show port;
 port 
------
 6667
(1 row)

postgres=# select * from t1 where a = -1;
 a  |   b   
----+-------
 -1 | aaaaa
(1 row)

postgres=# select * from t1 where a = -1;
 a  |   b   
----+-------
 -1 | bbbbb
(1 row)

As mentioned initially you can make the subscriber a publisher and the publisher a subscriber at the same time. So when we create this table on both instances:

create table t2 ( a int primary key );

Then create a publication on the subscriber node:

postgres=# create table t2 ( a int primary key );
CREATE TABLE
postgres=# show port;
 port 
------
 6667
(1 row)

postgres=# create publication my_second_publication for table t2;
CREATE PUBLICATION
postgres=# 

Then create the subscription to that on the publisher node:

postgres=# show port;
 port 
------
 6666
(1 row)

postgres=# create subscription my_second_subscription connection 'host=localhost port=6667 dbname=postgres user=postgres' publication my_second_publication;
CREATE SUBSCRIPTION

… we have a second logical replication the other way around:

postgres=# show port;
 port 
------
 6667
(1 row)
postgres=# insert into t2 values ( 1 );
INSERT 0 1
postgres=# insert into t2 values ( 2 );
INSERT 0 1
postgres=# 

On the other instance:

postgres=# show port;
 port 
------
 6666
(1 row)

postgres=# select * from t2;
 a 
---
 1
 2
(2 rows)

There are two new catalog views which give you information about subscriptions and publications:

postgres=# select * from pg_subscription;
 subdbid |        subname         | subowner | subenabled |                      subconninfo                       |      subslotname       |     subpublications     
---------+------------------------+----------+------------+--------------------------------------------------------+------------------------+-------------------------
   13216 | my_second_subscription |       10 | t          | host=localhost port=6667 dbname=postgres user=postgres | my_second_subscription | {my_second_publication}
(1 row)

postgres=# select * from pg_publication;
       pubname        | pubowner | puballtables | pubinsert | pubupdate | pubdelete 
----------------------+----------+--------------+-----------+-----------+-----------
 my_first_publication |       10 | f            | t         | t         | t
(1 row)
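Beyond these two catalogs, pg_publication_tables shows which tables a publication covers and pg_stat_subscription shows the state of the apply workers. A small sketch, assuming the views as they ship with PostgreSQL 10:

postgres=# select pubname, schemaname, tablename from pg_publication_tables;
postgres=# select subname, received_lsn, latest_end_lsn from pg_stat_subscription;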

What a cool feature and so easy to use. Thanks to all who brought that into PostgreSQL 10, great work.

 

This article In-core logical replication will hit PostgreSQL 10 appeared first on Blog dbi services.

Error ORA-01033 After Doing a Switchover in a 12.1 RAC Environment

Pythian Group - Thu, 2017-04-13 08:29

The other day I did a switchover in a RAC environment, which went pretty smoothly, but after doing the switchover, in the primary I kept getting the following error:

select dest_name,status,error from gv$archive_dest_status where dest_id=2;

DEST_NAME
--------------------------------------------------------------------------------
STATUS	  ERROR
--------- -----------------------------------------------------------------
LOG_ARCHIVE_DEST_2
ERROR	  ORA-01033: ORACLE initialization or shutdown in progress

LOG_ARCHIVE_DEST_2
ERROR	  ORA-01033: ORACLE initialization or shutdown in progress

I went and checked the standby, and saw that it was in recovery mode and waiting for the redo log:

PROCESS STATUS	     CLIENT_P CLIENT_PID	  THREAD#	 SEQUENCE#	     BLOCK#    ACTIVE_AGENTS	 KNOWN_AGENTS
------- ------------ -------- ---------- ---------------- ---------------- ---------------- ---------------- ----------------
ARCH	CONNECTED    ARCH     44474			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     133318			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     50602			0		 0		  0		   0		    0
ARCH	CLOSING      ARCH     44470			1	     21623	      14336		   0		    0
ARCH	CLOSING      ARCH     44476			1	     21624		  1		   0		    0
ARCH	CLOSING      ARCH     44472			2	     19221	      96256		   0		    0
RFS	IDLE	     LGWR     133322			1	     21625	      17157		   0		    0
RFS	IDLE	     LGWR     50620			2	     19222	      36611		   0		    0
MRP0	WAIT_FOR_LOG N/A      N/A			2	     19222	      36617		  0		   0

My first train of thought was that the password file was incorrect, so I recreated the password files and copied them from the primary to the standby nodes, but I still kept getting the same error. I reviewed the environment with the scripts in DOC ID 1581388.1 and everything seemed alright. It kept bugging me that the logs were not being applied even though they were being shipped to the standby (so it did have to do with the password file), but what really bothered me was that I had just recreated the password file in $ORACLE_HOME/dbs and still kept getting the same error.
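As a generic aside: a quick sanity check for password files, on the primary as well as on the standby, is to query v$pwfile_users and compare the entries on both sides (a minimal sketch, not a full diagnosis):

SQL> select username, sysdba, sysoper from v$pwfile_users;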

So after a while of troubleshooting, I found that in the new primary the password file was residing in an ASM Diskgroup, and that was the main culprit. This meant that I had to copy the password file from the ASM diskgroup in the primary to the standby.
Primary

[oracle@localhost trace]$ srvctl config database -d renedb
Database unique name: renedb
Database name: 
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA1/renedb/spfilerenedb.ora
Password file: +DATA1/renedb/PASSWORD/pwrenedb
Domain: 
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: ARCH1,DATA1,REDO
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oper
Database instances: renedb1,renedb2
Configured nodes: localhost,localhost
Database is administrator managed
[oracle@localhost trace]$ exit
-bash-4.1$ sudo su - grid
[sudo] password for pythian: 
[grid@localhost ~]$ . oraenv
ORACLE_SID = [+ASM1] ? 
The Oracle base remains unchanged with value /u01/app/grid
[grid@localhost ~]$ asmcmd
ASMCMD> pwcopy +DATA1/renedb/PASSWORD/pwrenedb /tmp/pwrenedb
copying +DATA1/renedb/PASSWORD/pwrenedb -> /tmp/pwrenedb
ASMCMD> exit

Standby

[oracle@localhost dbs]$ scp 10.10.0.1:/tmp/pwrenedb /tmp/pwrenedb_stby
pwrenedb_stby_phdb                                                                                                                                                                                                    100% 7680     7.5KB/s   00:00    
[oracle@localhost dbs]$ exit
logout
[pythian@localhost ~]$ sudo su - grid
[sudo] password for pythian: 
Last login: Fri Mar 31 21:55:53 MST 2017
[grid@localhost ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@localhost ~]$ asmcmd
ASMCMD> mkdir DATA/RENEDB/PASSWORD
ASMCMD> pwcopy /tmp/pwrenedb_stby_phdb +DATA/RENEDB/PASSWORD/pwrenedb_stby
copying /tmp/pwrenedb_stby_phdb -> +DATA/RENEDB/PASSWORD/pwrenedb_stby
ASMCMD> exit
[grid@localhost ~]$ exit
logout
[pythian@localhost ~]$ sudo su - oracle
Last login: Sat Apr  1 01:35:46 MST 2017 on pts/4
The Oracle base has been set to /u01/app/oracle
[oracle@localhost dbs]$ srvctl modify database -d renedb_stby -pwfile +DATA/RENEDB/PASSWORD/pwrenedb_stby
[oracle@localhost dbs]$ srvctl config  database -d renedb_stby
Database unique name: renedb_stby
Database name: 
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: /u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfilerenedb_stby.ora
Password file: +DATA/RENEDB/PASSWORD/pwrenedb_stby
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: ARCH,DATA,REDO
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oper
Database instances: renedb_stby1,renedb_stby2
Configured nodes: *******,***********
Database is administrator managed

Once I did this, the standby started applying the redo logs, and after the gap was closed the primary switchover status was "TO STANDBY".
Primary

Primary Site last generated SCN

*******************************

DB_UNIQUE_NAME	SWITCHOVER_STATUS	  CURRENT_SCN
--------------- -------------------- ----------------
renedb	TO STANDBY		 134480468945
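For reference, the switchover status above can be checked with a query along these lines (a sketch; the exact monitoring script is not shown here):

SQL> select db_unique_name, switchover_status, current_scn from v$database;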

Standby

Data Guard Apply Lag

********************

NAME	     LAG_TIME		  DATUM_TIME	       TIME_COMPUTED
------------ -------------------- -------------------- --------------------
apply lag    +00 00:00:00	  04/01/2017 04:05:51  04/01/2017 04:05:52

1 row selected.


Data Guard Gap Problems

***********************

no rows selected

PROCESS STATUS	     CLIENT_P CLIENT_PID	  THREAD#	 SEQUENCE#	     BLOCK#    ACTIVE_AGENTS	 KNOWN_AGENTS
------- ------------ -------- ---------- ---------------- ---------------- ---------------- ---------------- ----------------
ARCH	CONNECTED    ARCH     44474			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     133318			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     50602			0		 0		  0		   0		    0
ARCH	CLOSING      ARCH     44470			1	     21623	      14336		   0		    0
ARCH	CLOSING      ARCH     44476			1	     21624		  1		   0		    0
ARCH	CLOSING      ARCH     44472			2	     19221	      96256		   0		    0
RFS	IDLE	     LGWR     133322			1	     21625	      17157		   0		    0
RFS	IDLE	     LGWR     50620			2	     19222	      36611		   0		    0
MRP0	APPLYING_LOG N/A      N/A			2	     19222	      36617		  33		   33

9 rows selected.

Conclusion
In 12.1 it is recommended, as per DOC ID 1984091.1, to have the password file in an ASM diskgroup. So once I did this, I was able to work around error ORA-01033 and sleep well!

Note: This was originally published at rene-ace.com.

Categories: DBA Blogs

DB Auditing and ORDS

Kris Rice - Thu, 2017-04-13 08:09
There seems to be some confusion around how ORDS works with its connection pooling while running the REST call as the specified schema. The connection pool: consider a 50 PDB environment with concurrent users per PDB running some REST stuff.  Using a connection pool per PDB would mean 50 connection pools.  Then if a JET app ( or any HTML5/JS/.. ) is making REST calls Chrome will do this with 6 concurrent

Analyzing the right data

DBMS2 - Thu, 2017-04-13 07:05

0. A huge fraction of what’s important in analytics amounts to making sure that you are analyzing the right data. To a large extent, “the right data” means “the right subset of your data”.

1. In line with that theme:

  • Relational query languages, at their core, subset data. Yes, they all also do arithmetic, and many do more math or other processing than just that. But it all starts with the set theory.
  • Underscoring the power of this approach, other data architectures over which analytics is done usually wind up with SQL or “SQL-like” language access as well.

2. Business intelligence interfaces today don’t look that different from what we had in the 1980s or 1990s. The biggest visible* changes, in my opinion, have been in the realm of better drilldown, ala QlikView and then Tableau. Drilldown, of course, is the main UI for business analysts and end users to subset data themselves.

*I used the word “visible” on purpose. The advances at the back end have been enormous, and much of that redounds to the benefit of BI.

3. I wrote 2 1/2 years ago that sophisticated predictive modeling commonly fit the template:

  • Divide your data into clusters.
  • Model each cluster separately.

That continues to be tough work. Attempts to productize shortcuts have not caught fire.

4. In an example of the previous point, anomaly management technology can, in theory, help shortcut any type of analytics, in that it tries to identify what parts of your data to focus on (and why). But it’s in its early days; none of the approaches to general anomaly management has gained much traction.

5. Marketers have vast amounts of information about us. It starts with every credit card transaction line item and a whole lot of web clicks. But it’s not clear how many of those (10s of) thousands of columns of data they actually use.

6. In some cases, the “right” amount of data to use may actually be tiny. Indeed, some statisticians claim that fewer than 10 data points may be enough to get a good model. I’m skeptical, at least as to the practical significance of such extreme figures. But on the more plausible side — if you’re hunting bad guys, it may not take very many separate facts before you have good evidence of collusion or fraud.

Internet fraud excepted, of course. Identifying that usually involves sifting through a lot of log entries.

7. All the needle-hunting in the world won’t help you unless what you seek is in the haystack somewhere.

  • Often, enterprises explicitly invest in getting more data.
  • Keeping everything you already generate is the obvious choice for most categories of data, but some of the lowest-value-per-bit logs may forever be thrown away.

8. Google is famously in the camp that there’s no such thing as too much data to analyze. For example, it famously uses >500 “signals” in judging the quality of potential search results. I don’t know how many separate data sources those signals are informed by, but surely there are a lot.

9. Few predictive modeling users demonstrate a need for vast data scaling. My support for that claim is a lot of anecdata. In particular:

  • Some predictive modeling techniques scale well. Some scale poorly. The level of pain around the “scale poorly” aspects of that seems to be fairly light (or “moderate” at worst). For example:
    • In the previous technology generation, analytic DBMS and data warehouse appliance vendors tried hard to make statistical packages scale across their systems. Success was limited. Nobody seemed terribly upset.
    • Cloudera’s Data Science Workbench messaging isn’t really scaling-centric.
  • Spark’s success in machine learning is rather rarely portrayed as centering on scaling. And even when it is, Spark basically runs in memory, so each Spark node isn’t processing all that much data.

10. Somewhere in this post — i.e. right here :) — let’s acknowledge that the right data to analyze may not be exactly what was initially stored. Data munging/wrangling/cleaning/preparation is often a big deal. Complicated forms of derived data can be important too.

11. Let’s also mention data marts. Basically, data marts subset and copy data, because the data will be easier to analyze in its copied form, or because they want to separate workloads between the original and copied data store.

  • If we assume the data is on spinning disks or even flash, then the need for that strategy declined long ago.
  • Suppose you want to keep data entirely in memory? Then you might indeed want to subset-and-copy it. But with so many memory-centric systems doing decent jobs of persistent storage too, there’s often a viable whole-dataset management alternative.

But notwithstanding the foregoing:

  • Security/access control can be a good reason for subset-and-copy.
  • So can other kinds of administrative simplification.

12. So what does this all suggest going forward? I believe:

  • Drilldown is and will remain central to BI. If your BI doesn’t support robust drilldown, you’re doing it wrong. “Real-time” use cases are not exceptions to this rule.
  • In a strong overlap with the previous point, drilldown is and will remain central to monitoring. Whatever monitoring means to you, the ability to pinpoint the specific source of interesting signals is crucial.
  • The previous point can be recast as saying that it’s crucial to identify, isolate and explain anomalies. Some version(s) of anomaly management will become a big deal.
  • SQL and “SQL-like” languages will remain integral to analytic processing for a long time.
  • Memory-centric analytic frameworks such as Spark will continue to win. The data size constraints imposed by memory-centric processing will rarely cause difficulties.


Categories: Other

Spring Boot Application for Pivotal Cloud Cache Service

Pas Apicella - Thu, 2017-04-13 06:21
I previously blogged about the Pivotal Cloud Cache service in Pivotal Cloud Foundry as follows

http://theblasfrompas.blogspot.com.au/2017/04/getting-started-with-pivotal-cloud.html

In that post I promised a follow-up with a Spring Boot application which would use the PCC service, to show what the code would look like. That demo exists at the GitHub URL below.

https://github.com/papicella/SpringBootPCCDemo

The GitHub URL above shows how you can clone, package and then push this application to PCF with your own PCC service instance, using the "Spring Cloud GemFire Connector".



More Information

Pivotal Cloud Cache Docs
http://docs.pivotal.io/p-cloud-cache/index.html



Categories: Fusion Middleware

8 + 1 = 9, yes, true, but …

Yann Neuhaus - Thu, 2017-04-13 04:14

[Image: dbca_mb_1]

By the way, if you really would do that (the screenshot is from 12.1.0.2):

SQL> alter system set sga_target=210m scope=spfile;

System altered.

SQL> alter system set sga_max_size=210m scope=spfile;

System altered.

SQL> alter system set pga_aggregate_target=16m scope=spfile;

System altered.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE	12.1.0.2.0	Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> startup force
ORA-00821: Specified value of sga_target 212M is too small, needs to be at least 320M
SQL> 

The same for 12.2.0.1:

SQL> alter system set sga_target=210m scope=spfile;

System altered.

SQL> alter system set sga_max_size=210m scope=spfile;

System altered.

SQL> alter system set pga_aggregate_target=16m scope=spfile;

System altered.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE	12.2.0.1.0	Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production

SQL> startup force
ORA-00821: Specified value of sga_target 212M is too small, needs to be at least 468M
ORA-01078: failure in processing system parameters
SQL> 
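If you really end up with an instance that no longer starts because of ORA-00821, one way out is to connect to the idle instance as SYSDBA, generate a pfile from the spfile, raise the memory parameters there and start with the corrected pfile. A sketch of the approach (the path is just an example):

SQL> create pfile='/tmp/init_fix.ora' from spfile;
-- edit /tmp/init_fix.ora and set sga_target/sga_max_size to at least the value reported by ORA-00821
SQL> startup pfile='/tmp/init_fix.ora';
SQL> create spfile from pfile='/tmp/init_fix.ora';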

To close this post, here is another one that caught my eye yesterday:
[Image: solarisx64_2]

Seems I totally missed that there was an x64 version of Solaris 8 and 9 :)

 

This article 8 + 1 = 9, yes, true, but … appeared first on Blog dbi services.

Oracle 12c – Why you shouldn’t do a crosscheck archivelog all in your regular RMAN backup scripts

Yann Neuhaus - Thu, 2017-04-13 02:28

Crosschecking in RMAN is quite cool stuff. With the RMAN crosscheck you can update an outdated RMAN repository about backups or archivelogs whose repository records do not match their physical status.

For example, if a user removes archived logs from disk with an operating system command, the repository (RMAN controlfile or RMAN catalog) still indicates that the logs are on disk, when in fact they are not. It is important to know that the RMAN CROSSCHECK command never deletes any operating system files or removes any repository records; it just updates the repository with the correct information. In case you really want to delete something, you must use the DELETE command for these operations.
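The repository's view of the archived logs can also be inspected directly with SQL. A minimal sketch (not part of the original test case) that lists the archived logs as the controlfile sees them:

SQL> select sequence#, name, status, deleted from v$archived_log order by sequence#;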

Manually removing archived logs or anything else out of the fast recovery area is something you should never do; however, in reality it still happens.

But when it happens, you want to know which files are missing from their physical location. So why not run a crosscheck archivelog all regularly in your backup scripts? Isn't that a good idea?

From my point of view it is not, for two reasons:

  • Your backup script runs slower because you do an extra step
  • But first and foremost, you will not notice if an archived log is missing

Let’s run a little test case. I simply move one archived log away and run the backup archivelog all command afterwards.

oracle@dbidg03:/u03/fast_recovery_area/CDB/archivelog/2017_03_30/ [CDB (CDB$ROOT)] mv o1_mf_1_61_dfso8r7p_.arc o1_mf_1_61_dfso8r7p_.arc.20170413a

RMAN> backup archivelog all;

Starting backup at 13-APR-2017 08:03:14
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=281 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=44 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 04/13/2017 08:03:17
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_61_dfso8r7p_.arc
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7

This is exactly what I expected. I want a clear error message in case an archived log is missing. I don't want Oracle to skip over it and just continue as if nothing had happened. But what happens if I run a crosscheck archivelog all before running my backup command?

RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=281 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=44 device type=DISK
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_28/o1_mf_1_56_dfmzywt1_.arc RECID=73 STAMP=939802622
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_28/o1_mf_1_57_dfo40o1g_.arc RECID=74 STAMP=939839542
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_29/o1_mf_1_58_dfovy7cj_.arc RECID=75 STAMP=939864041
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_29/o1_mf_1_59_dfq7pcwz_.arc RECID=76 STAMP=939908847
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_60_dfrg8f8o_.arc RECID=77 STAMP=939948334
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_31/o1_mf_1_62_dfv0kybr_.arc RECID=79 STAMP=940032607
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_31/o1_mf_1_63_dfw5s2l8_.arc RECID=80 STAMP=940070724
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_04_12/o1_mf_1_64_dgw5mgsl_.arc RECID=81 STAMP=941119119
validation succeeded for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_04_13/o1_mf_1_65_dgy552z0_.arc RECID=82 STAMP=941184196
Crosschecked 9 objects

validation failed for archived log
archived log file name=/u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_61_dfso8r7p_.arc RECID=78 STAMP=939988281
Crosschecked 1 objects
RMAN>

The crosscheck validation failed for the archived log which I had moved beforehand. Perfect, the crosscheck has found the issue.

RMAN> list expired backup;

specification does not match any backup in the repository

RMAN> list expired archivelog all;

List of Archived Log Copies for database with db_unique_name CDB
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
78      1    61      X 30-MAR-2017 00:45:33
        Name: /u03/fast_recovery_area/CDB/archivelog/2017_03_30/o1_mf_1_61_dfso8r7p_.arc

However, if I run the backup archivelog all afterwards, RMAN continues as if nothing had ever happened, and in case you are not monitoring expired archived logs or backups, you will never notice it.

RMAN> backup archivelog all;

Starting backup at 13-APR-2017 08:05:01
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=56 RECID=73 STAMP=939802622
input archived log thread=1 sequence=57 RECID=74 STAMP=939839542
input archived log thread=1 sequence=58 RECID=75 STAMP=939864041
input archived log thread=1 sequence=59 RECID=76 STAMP=939908847
input archived log thread=1 sequence=60 RECID=77 STAMP=939948334
channel ORA_DISK_1: starting piece 1 at 13-APR-2017 08:05:01
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=62 RECID=79 STAMP=940032607
input archived log thread=1 sequence=63 RECID=80 STAMP=940070724
input archived log thread=1 sequence=64 RECID=81 STAMP=941119119
input archived log thread=1 sequence=65 RECID=82 STAMP=941184196
input archived log thread=1 sequence=66 RECID=83 STAMP=941184301
channel ORA_DISK_2: starting piece 1 at 13-APR-2017 08:05:01
channel ORA_DISK_2: finished piece 1 at 13-APR-2017 08:05:47
piece handle=/u03/fast_recovery_area/CDB/backupset/2017_04_13/o1_mf_annnn_TAG20170413T080501_dgy58fz7_.bkp tag=TAG20170413T080501 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:46
channel ORA_DISK_1: finished piece 1 at 13-APR-2017 08:06:07
piece handle=/u03/fast_recovery_area/CDB/backupset/2017_04_13/o1_mf_annnn_TAG20170413T080501_dgy58fy4_.bkp tag=TAG20170413T080501 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:06
Finished backup at 13-APR-2017 08:06:07

Starting Control File and SPFILE Autobackup at 13-APR-2017 08:06:07
piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_04_13/o1_mf_s_941184367_dgy5bh7w_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13-APR-2017 08:06:08

RMAN>

But is this really what I want? Probably not. Whenever an archived log is missing, RMAN should stop right away and throw an error message. This gives me the chance to check what went wrong and the possibility to correct it.
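If you still want some visibility without putting the crosscheck into the backup script itself, a separate job can run the crosscheck outside the backup window and then report everything it marked as expired. A minimal sketch of the reporting part, assuming the STATUS column of v$archived_log where 'X' means expired:

SQL> select thread#, sequence#, name from v$archived_log where status = 'X';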

Conclusion

I don't recommend running a crosscheck archivelog all in your regular RMAN backup scripts. This is a command that should be run manually in case it is needed. You just make your backup slower (ok, not by much, but still), and you will probably never notice when an archived log is missing, which can lead to a database that can only be recovered to the point before the missing archived log.

 

This article Oracle 12c – Why you shouldn’t do a crosscheck archivelog all in your regular RMAN backup scripts appeared first on Blog dbi services.

Using WebLogic 12C RESTful interface to query a WebLogic Domain configuration

Yann Neuhaus - Thu, 2017-04-13 00:12

WebLogic 12.2.1 provides a new REST management interface with full access to all WebLogic Server resources.
This new interface provides an alternative to WLST scripting or JMX development for management and monitoring of WebLogic Domains.
This blog explains how the RESTful interface can be used to determine a WebLogic Domain configuration and display its principal attributes.

For this purpose, a Search RESTful call will be used.
The RESTful URL to point to the search is: http://vm01.dbi-workshop.com:7001/management/weblogic/latest/edit/search
This search RESTful URL points to the root of the WebLogic Domain configuration managed beans tree.

The search call is an HTTP POST and requires a JSON structure to define the resources we are looking for.

{
    links: [],
    fields: [ 'name', 'configurationVersion' ],
    children: {
        servers: {
            links: [],
            fields: [ 'name','listenAddress','listenPort','machine','cluster' ],
            children: {
                SSL: {
                    fields: [ 'enabled','listenPort' ], links: []
                }
            }
        }
    }
}

The JSON structure above defines the search attributes provided in the HTTP POST.
This command searches for the WebLogic Domain name and version.
Then it looks at the servers in the children list, for which it prints the name, listen address, listen port, machine name and, if the server is a member of a cluster, the cluster name. In each server's children list, it looks for the SSL entry and displays whether SSL is enabled and the SSL listen port.

To execute this REST URL from the Unix command line, we will use the curl command:

curl -g --user monitor:******** -H X-Requested-By:MyClient -H Accept:application/json -H Content-Type:application/json -d "{ links: [], fields: [ 'name', 'configurationVersion' ], children: { servers: { links: [], fields: [ 'name', 'listenPort','machine','cluster' ], children: { SSL: { fields: [ 'listenPort' ], links: [] }} } } }" -X POST http://vm01.dbi-workshop.com:7001/management/weblogic/latest/edit/search

Below is a sample of the results provided by such command execution:

{
    "configurationVersion": "12.2.1.0.0",
    "name": "base_domain",
    "servers": {"items": [
    {
          "listenAddress": "vm01.dbi-workshop.com",
          "name": "AdminServer",
          "listenPort": 7001,
          "cluster": null,
          "machine": [
                 "machines",
                 "machine1"
          ],
          "SSL": {
                 "enabled": true,
                 "listenPort": 7002
          }
   },
   {
          "listenAddress": "vm01.dbi-workshop.com",
          "name": "server1",
          "listenPort": 7003,
          "cluster": [
                 "clusters",
                 "cluster1"
          ],
          "machine": [
                 "machines",
                 "machine1"
          ],
          "SSL": {
                 "enabled": false,
                 "listenPort": 7013
          }
  },
  {
          "listenAddress": "vm01.dbi-workshop.com",
          "name": "server2",
          "listenPort": 7004,
          "cluster": [
                "clusters",
                "cluster1"
          ],
          "machine": [
                "machines",
                "machine1"
          ],
          "SSL": {
                "enabled": false,
                "listenPort": 7014
          }
  },
  {
         "listenAddress": "vm01.dbi-workshop.com",
         "name": "server3",
         "listenPort": 7005,
         "cluster": null,
         "machine": [
                "machines",
                "machine1"
         ],
         "SSL": {
               "enabled": false,
               "listenPort": 7015
         }
  }
]}
 

This article Using WebLogic 12C RESTful interface to query a WebLogic Domain configuration appeared first on Blog dbi services.

My Vagrant Journey

Michael Dinh - Wed, 2017-04-12 23:09

This is probably nothing new.

Wanted to build my own Vagrant box with the prerequisites to install GI/DB.

Instead of creating a new network, use an existing one to assign the IP.

The shared folder uses an existing location instead of having to copy binaries into the Vagrant location.

The same Vagrantfile can be used with a little search and replace.

You might ask: why not automate the GI/DB install? I like to practice installing and cloning.

Next, install GG and DG.

After that, create 2 RAC clusters using the same Vagrant box?

dinh@CMWPHV1 MINGW64 /f/Vagrant
$ vboxmanage list hostonlyifs
Name:            VirtualBox Host-Only Ethernet Adapter
GUID:            8898fc55-9a80-4d5e-9a82-d2dc776ef00e
DHCP:            Disabled
IPAddress:       192.168.146.1
NetworkMask:     255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:22
MediumType:      Ethernet
Status:          Up
VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter

dinh@CMWPHV1 MINGW64 /f/Vagrant
$ vagrant box list
There are no installed boxes! Use `vagrant box add` to add some.

dinh@CMWPHV1 MINGW64 /f/Vagrant
$ vagrant global-status
id       name   provider state  directory
--------------------------------------------------------------------
There are no active Vagrant environments on this computer! Or,
you haven't destroyed and recreated Vagrant environments that were
started with an older version of Vagrant.

dinh@CMWPHV1 MINGW64 /f/Vagrant
$ cd arrow1/

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow1
$ vagrant status
Current machine states:

arrow1                    not created (virtualbox)

The environment has not yet been created. Run `vagrant up` to
create the environment. If a machine is not created, only the
default provider will be shown. So if a provider is not listed,
then the machine is not created for that environment.

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow1
$ vagrant up
Bringing machine 'arrow1' up with 'virtualbox' provider...
==> arrow1: Box 'ol73-min' could not be found. Attempting to find and install...
    arrow1: Box Provider: virtualbox
    arrow1: Box Version: >= 0
==> arrow1: Box file was not detected as metadata. Adding it directly...
==> arrow1: Adding box 'ol73-min' (v0) for provider: virtualbox
    arrow1: Unpacking necessary files from: file:///F:/Vagrant/ol73-min.box
    arrow1:
==> arrow1: Successfully added box 'ol73-min' (v0) for 'virtualbox'!
==> arrow1: Importing base box 'ol73-min'...
==> arrow1: Matching MAC address for NAT networking...
==> arrow1: Setting the name of the VM: arrow1
==> arrow1: Clearing any previously set network interfaces...
==> arrow1: Preparing network interfaces based on configuration...
    arrow1: Adapter 1: nat
    arrow1: Adapter 2: hostonly
==> arrow1: Forwarding ports...
    arrow1: 22 (guest) => 2011 (host) (adapter 1)
==> arrow1: Running 'pre-boot' VM customizations...
==> arrow1: Booting VM...
==> arrow1: Waiting for machine to boot. This may take a few minutes...
    arrow1: SSH address: 127.0.0.1:2011
    arrow1: SSH username: vagrant
    arrow1: SSH auth method: private key
    arrow1: Warning: Remote connection disconnect. Retrying...
==> arrow1: Machine booted and ready!
[arrow1] GuestAdditions 5.1.18 running --- OK.
==> arrow1: Checking for guest additions in VM...
==> arrow1: Setting hostname...
==> arrow1: Configuring and enabling network interfaces...
==> arrow1: Mounting shared folders...
    arrow1: /vagrant => F:/Vagrant/arrow1
    arrow1: /sf_working => C:/dinh/Dropbox/working
    arrow1: /sf_OracleSoftware => F:/OracleSoftware

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow1
$ cd ../arrow2/

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow2
$ vagrant up
Bringing machine 'arrow2' up with 'virtualbox' provider...
==> arrow2: Importing base box 'ol73-min'...
==> arrow2: Matching MAC address for NAT networking...
==> arrow2: Setting the name of the VM: arrow2
==> arrow2: Clearing any previously set network interfaces...
==> arrow2: Preparing network interfaces based on configuration...
    arrow2: Adapter 1: nat
    arrow2: Adapter 2: hostonly
==> arrow2: Forwarding ports...
    arrow2: 22 (guest) => 2012 (host) (adapter 1)
==> arrow2: Running 'pre-boot' VM customizations...
==> arrow2: Booting VM...
==> arrow2: Waiting for machine to boot. This may take a few minutes...
    arrow2: SSH address: 127.0.0.1:2012
    arrow2: SSH username: vagrant
    arrow2: SSH auth method: private key
    arrow2: Warning: Remote connection disconnect. Retrying...
==> arrow2: Machine booted and ready!
[arrow2] GuestAdditions 5.1.18 running --- OK.
==> arrow2: Checking for guest additions in VM...
==> arrow2: Setting hostname...
==> arrow2: Configuring and enabling network interfaces...
==> arrow2: Mounting shared folders...
    arrow2: /vagrant => F:/Vagrant/arrow2
    arrow2: /sf_working => C:/dinh/Dropbox/working
    arrow2: /sf_OracleSoftware => F:/OracleSoftware

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow2
$ vagrant global-status
id       name   provider   state   directory
-----------------------------------------------------------------------
e779ae1  arrow1 virtualbox running F:/Vagrant/arrow1
7642809  arrow2 virtualbox running F:/Vagrant/arrow2

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date. To interact with any of the machines, you can go to
that directory and run Vagrant, or you can use the ID directly
with Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow2
$ vagrant box list
ol73-min (virtualbox, 0)

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow2
$ vboxmanage list runningvms
"arrow1" {d8e472d1-92c1-4211-ac86-99a8461f7cf5}
"arrow2" {66ad90a6-15ad-4096-a04e-e30c40ad2d70}

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow2
$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# vboxmanage list vms
# vagrant package --output ol73-min.box --base ol73-min
# vagrant box add --name arrow1 file:///F:/Vagrant/ol73-min.box
# vboxmanage sharedfolder add "arrow1" --name "OracleSoftware" --hostpath "F:\OracleSoftware" --automount
# vboxmanage modifyvm "ol73-min" --natpf1 "ssh,tcp,,2222,,22"

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "arrow2" , primary: true do |config|
    config.vm.box = "ol73-min"
    config.vm.box_url = "file:///F:/Vagrant/ol73-min.box"
    config.vm.network "private_network", ip: "192.168.146.12"
    config.vm.network "forwarded_port", guest: 22, host: 2012, host_ip: "127.0.0.1", id: "ssh"
    config.vm.box_check_update = false
    config.ssh.insert_key = false
    config.vm.host_name = "arrow2"
    config.vm.synced_folder "F:\\OracleSoftware", "/sf_OracleSoftware", type: "nfs"
    config.vm.synced_folder "C:\\dinh\\Dropbox\\working", "/sf_working", type: "nfs"
    config.vm.provider "virtualbox" do |vb|
      #vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      #vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
      vb.memory = "1536"
      vb.name = "arrow2"
    end
  end
end

dinh@CMWPHV1 MINGW64 /f/Vagrant/arrow2
$

Production Mobile App with Oracle JET Hybrid in 2 Hours

Andrejus Baranovski - Wed, 2017-04-12 22:58
Do you want to know a secret: how to build a mobile application in just 2 hours? Use Oracle JET Hybrid. The beauty of Oracle JET Hybrid is that you can reuse the same source code (HTML and JS) from a regular JET application. The JET UI is responsive out of the box, which allows JET screens from the Web to be rendered on a mobile device without changes.

We used our JET 3.0.0 production app (Red Samurai and Oracle PaaS JCS Success - JET/ADF BC REST Cloud Production Application) and created a mobile app version. This process took around 2 hours.

Below I will list the steps required to create a JET Hybrid mobile app out of an existing JET app.

1. Execute sudo npm -g install cordova to add Cordova to the JET tooling

2. Execute sudo yo oraclejet:hybrid --platforms=android to create a new JET Hybrid application. Windows and iOS are supported as well

3. Copy the HTML and JS files from the src folder of the JET app into the src folder of the JET Hybrid app. The structure of the JET Hybrid app with HTML and JS files:


4. Execute sudo grunt build --platform=android to compile the JET Hybrid app

5. Execute sudo grunt serve --platform=android --destination=device to deploy the JET Hybrid app to a mobile device. If you are deploying to Android, you need to have the Android tools installed on your machine

Dashboard screen on mobile device, built with JET data visualization components:


Customer setup screen contains JET search list and various input components with validation:


JET dialog looks good in mobile too, with input number fields:


JET input date component on mobile screen:


JET table with pagination is rendered very well and is easily usable on the small screen:


I should say we are very happy with the JET Hybrid functionality. It allows us to reuse JET application code and build a mobile app very fast.

Welcome to M|17

Yann Neuhaus - Wed, 2017-04-12 20:00

[Image: m17bannernew]

Welcome to MariaDB's first user conference

On the 11th, this big event started at 09:00 at the Conrad Hotel in New York, close to the One World Trade Center.
After the short registration process, where we received a full bag of goodies (mobile phone lens, Jolt charger, cap, note block, …),
we could choose between 3 workshops.
– Scaling and Securing MariaDB for High Availability
– MariaDB ColumnStore for High Performance Analytics
– Building Modern Applications with MariaDB

I decided to go to the first one presented by Michael de Groot, technical consultant at MariaDB.
After a theoretical introduction to the detailed MariaDB cluster technology and mechanisms (around 40 slides), we had to build a MariaDB cluster composed of 4 nodes from scratch, and I have to admit that this exercise was well prepared, as we just had to follow the instructions displayed on the screen.
At the end, meaning 12:30, almost everybody had deployed the MariaDB cluster and was able to use and manage it.

Afterwards, it was time to get lunch. A big buffet of salads and sandwiches was waiting for us.
It was really nice because we could meet people such as Peter Zaitsev (Percona's CEO) in a cool and relaxed atmosphere.

[Image: Welcome-mariadb]
After lunch, a keynote was delivered by MariaDB CEO Michael Howard in the biggest conference room of the hotel, where around 400 people were present.
He mainly talked about the strategic orientation of MariaDB in the Open Source world for the coming years.
Unfortunately the air conditioning was too cold and a lot of people started sneezing, me included, and I had to keep my jacket on the whole time.

Then a special guest speaker, Joan Tay Kim Choo, Executive Director of Technology Operations at DBS Bank, talked about their success story:
how they migrated all their databases from Oracle Enterprise and DB2 to MariaDB.

Roger Bodamer, MariaDB Chief Product Officer, then also had his keynote session.
It was really interesting because he discussed how MariaDB will exploit the fundamental architectural changes in the cloud and how MariaDB will enable both OLTP and analytical use cases for enterprises at any scale.

Finally, at five, the Welcome Reception and Technology Pavilion started, in other words a small party.
Good music, good red wines (the Cabernet was really good), good atmosphere.
We could meet all the speakers, and I had the chance to meet Michael Widenius, alias "Monty", founder of the MySQL Server, a great moment for me.
He graciously accepted to take pictures with me, several times even, because the light was really bad.
[Image: MontySaid2]

Around 18:30 the party was almost over; I was still there, one of the last guests, finishing my glass of Cabernet and thinking of tomorrow, the second day of this event, and all the sessions I planned to see.

 

This article Welcome to M|17 appeared first on Blog dbi services.

Unable to MERGE records on TABLE_NAME@DBLink

Tom Kyte - Wed, 2017-04-12 17:26
Hi Chris/Connor, TYPE tab_Varchar2 IS TABLE OF VARCHAR2(500); Before executing the MERGE we are fetching ROWIDs of TB_ORDERS in chunks using BULK COLLECT LIMIT 5000; a DBLINK is created which is pointing to the archival DB, and we need to MERGE records...
Categories: DBA Blogs

Schedule job "B" to run after job "A" completes and prevent job "A" from running again until job "B" completes.

Tom Kyte - Wed, 2017-04-12 17:26
AskTom, I have an Oracle Scheduled job called "A" that runs every 5 minutes and takes 10 seconds to complete but could run longer. I have a second job called "B" that I want to run at 1 am daily. I don't want job "B" to run at the same time as j...
Categories: DBA Blogs
