Feed aggregator

Backup and Restore records from DB Objects (Table)

Tom Kyte - Sun, 2018-06-24 15:26
Wanted to check if there is some easy way to create a backup of the tables/schema and use some script that will load back the data from the backup file. I have DBCS access so can run the sh script as well. Please let me know if there is a simpl...
Categories: DBA Blogs

Spot the diffs between two database schemas inside SQL and PL/SQL code

Tom Kyte - Sun, 2018-06-24 15:26
Hi : I have two Oracle databases (say, PROD and TEST), and in a given schema (present in both DBs) I must ensure that the same SQL code (inside views) and/or PL/SQL code (triggers, procs, funcs, packages) exists, disregarding the non-functional dif...
Categories: DBA Blogs

Making API calls from Oracle database

Tom Kyte - Sun, 2018-06-24 15:26
In our office, Environment 1: we have an Oracle database with Oracle APEX installed. Environment 2: We have a PCI compliant application with some NON-PCI APIs exposed via Kong We want to call the Non-PCI APIs from oracle database and were to...
Categories: DBA Blogs

Cancelling long running queries

Tom Kyte - Sun, 2018-06-24 15:26
Dear Tom, I'm wondering about how a 'cancel' in Oracle works. For example, I have a long running query and I define a timeout on application level. Once the timeout is reached, the application sends a 'cancel'. What I observed is that the canc...
Categories: DBA Blogs

materialized view problem while refreshing

Tom Kyte - Sun, 2018-06-24 15:26
Hi We have an ORACLE 8.1.7 database on suse linux 7.2 and we have a materialized view with joins and created a primary key constraint on the mview. The refresh mode and refresh type of the created mview is refresh fast on demand. ...
Categories: DBA Blogs

18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only

Yann Neuhaus - Sun, 2018-06-24 14:32

When you clone a PDB on the primary CDB, you want the same operation to be done automatically on the standby ones. Unfortunately, for this operation, the standby CDB must be opened read-only. In 12.1 even the PDB needed to be opened read-only (Ludovico Caldara reported this in his blog). This, as far as I know, was fixed in 12.2, where MRP reads the files without the need to have the PDB opened. But another problem comes with online cloning, as reported by Alain Fuhrer in his blog, where the CDB needs to be opened read-only with real-time apply. This again requires the Active Data Guard option, which then becomes mandatory to use the full power of the multitenant option. It brings the cost of using multitenant from x1.36 (the price of this option alone) to x1.61 (for both options), on the primary and the standby servers. Fortunately, this has been addressed in 18c: you no longer need Active Data Guard to have the CDB opened read-only with real-time apply, as long as you ensure that the user PDBs are always closed while apply is on.

This test was done on the Oracle Cloud DBaaS because it is very easy to provision a Data Guard configuration there. It is 18.1 because that is what gets created, and the patch apply for 18.2 is not so easy. The behaviour, however, is the same in 18.2.

CDB in MOUNT does not require ADG

The standby is not opened:

SQL> select dbid,name,open_mode,database_role from v$database;
 
      DBID NAME      OPEN_MODE            DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL      MOUNTED              PHYSICAL STANDBY
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         4 PDB1                           MOUNTED

In the primary, we see that the standby destination is in real time apply, but not opened:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         MOUNTED-STANDBY MANAGED REAL TIME APPLY
         3 LOG_ARCHIVE_DEST_3   INACTIVE  LOCAL            UNKNOWN         IDLE
         4 LOG_ARCHIVE_DEST_4   INACTIVE  LOCAL            UNKNOWN         IDLE

This is the information that is logged into DBA_FEATURE_USAGE_STATISTICS. Here is the query that counts the usage of real-time query, which is one of the features that requires the ADG option:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
  COUNT(*)
----------
         0

Here we have no ADG usage recorded because the database is not opened read-only. But if you try some PDB cloning on the primary, you will quickly encounter problems: the PDB on the standby has no datafiles.
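As a side note, a quick way to see that symptom on the standby is to look for the placeholder files that the standby creates when it cannot find the datafiles. This is a hedged sketch, not part of the original test; file numbers and names will differ on your system:

```sql
-- On the standby: datafiles that could not be created show up as UNNAMED
select file#, name from v$datafile where name like '%UNNAMED%';

-- and the recovery process flags them with an error
select file#, error from v$recover_file where error is not null;
```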

CDB in READ ONLY with PDBs closed does not require ADG in 18c

I’ll now open the standby read-only:

SQL> alter database open;
Database altered.
 
SQL> show pdbs;
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB1                           READ ONLY  NO

Unfortunately, this has also opened the PDB, because the saved state was OPEN on the primary database and we cannot manage that properly on the standby (except with Grid Infrastructure, Oracle Restart, custom scripts or database triggers). I close it quickly, before the ADG usage is recorded:
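For reference, the saved states can be listed and discarded with the following commands. This is a hedged sketch, run on the primary; as noted above, it does not give proper role-based control on the standby:

```sql
-- List the saved states that are replayed when the CDB opens
select con_name, instance_name, state from dba_pdb_saved_states;

-- Discard the saved OPEN state for PDB1 so it stays in MOUNT at the next open
alter pluggable database pdb1 discard state;
```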

SQL> alter pluggable database pdb1 close;
Pluggable database altered.

So, my standby is opened read-only with real-time apply:

SQL> select dbid,name,open_mode,database_role from v$database;
 
      DBID NAME      OPEN_MODE            DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL      READ ONLY WITH APPLY PHYSICAL STANDBY

but none of the user PDBs is opened:

SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB1                           MOUNTED

Here is what we see from the primary about this archive destination:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         OPEN_READ-ONLY  MANAGED REAL TIME APPLY
         3 LOG_ARCHIVE_DEST_3   INACTIVE  LOCAL            UNKNOWN         IDLE
         4 LOG_ARCHIVE_DEST_4   INACTIVE  LOCAL            UNKNOWN         IDLE

In 12.2, this would have recorded real-time query usage, because the standby is opened for queries and we are in managed recovery mode. Here is the 12.2 query to detect it (you can see it in catfusrg.sql):

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%' and database_mode = 'OPEN_READ-ONLY';
 
  COUNT(*)
----------
         1

But the pattern has been changed in 18c to include ‘QUERY’:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
  COUNT(*)
----------
         0

and this ‘QUERY’ word is added only when some user pluggable databases are not in MOUNT but opened for queries.

I quickly check that a PDB cloned on the primary:

SQL> create pluggable database PDB2 from PDB1 keystore identified by "Ach1z0#d";
Pluggable database created.
 
SQL> alter pluggable database PDB2 open;
Pluggable database altered.

has its clone created on the standby, in closed mode:

SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB1                           MOUNTED
         5 PDB2                           MOUNTED

So, finally, we have a good way to use the multitenant features without the need for Active Data Guard. This is good for DBaaS, where we want easy cloning and Data Guard protection for availability, but have no need to query the PDBs on the standby.

CDB in READ ONLY with a PDB opened requires ADG in 18c

Now, if I open the PDB:

SQL> select dbid,name,open_mode,database_role from v$database;
 
      DBID NAME      OPEN_MODE            DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL      READ ONLY WITH APPLY PHYSICAL STANDBY
 
SQL> alter pluggable database pdb1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB1                           READ ONLY  NO

From the primary, the recovery mode for this standby mentions ‘WITH QUERY’ for the managed real time apply:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         OPEN_READ-ONLY  MANAGED REAL TIME APPLY WITH QUERY
         3 LOG_ARCHIVE_DEST_3   INACTIVE  LOCAL            UNKNOWN         IDLE
         4 LOG_ARCHIVE_DEST_4   INACTIVE  LOCAL            UNKNOWN         IDLE

And only in this case is the ADG usage recorded:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
  COUNT(*)
----------
         1

The proper way to open without activating ADG

In the previous test, the PDB was opened for a short time when I opened the CDB, because of the saved state. If you want to avoid this, the proper way is to ensure that apply is off when you open the CDB, so that you never have, at the same time, a PDB opened and real-time apply.

I’m back to the state where the standby is in MOUNT with apply on:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         MOUNTED-STANDBY MANAGED REAL TIME APPLY

I stop the apply in the standby:

DGMGRL> edit database "ORCL_02" set state=apply-off;
Succeeded.

I check that the primary sees that apply is off:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         MOUNTED-STANDBY IDLE

I open the standby CDB read-only and ensure that all PDBs are closed:

SQL> alter database open;
Database altered.
 
SQL> alter pluggable database all close;
Pluggable database altered.
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB1                           MOUNTED

The primary sees the open state:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         OPEN_READ-ONLY  IDLE

I can now restart the apply:

DGMGRL> edit database "ORCL_02" set state=apply-on;
Succeeded.

Then the primary detects the managed recovery:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         OPEN_READ-ONLY  MANAGED

Then once the gap is resolved, we are in real time apply:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
   DEST_ID DEST_NAME            STATUS    TYPE             DATABASE_MODE   RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
         1 LOG_ARCHIVE_DEST_1   VALID     LOCAL            OPEN            IDLE
         2 LOG_ARCHIVE_DEST_2   VALID     PHYSICAL         OPEN_READ-ONLY  MANAGED REAL TIME APPLY

But there is no mention of ‘WITH QUERY’, thanks to the closed PDBs, which means that no ADG usage is recorded. In summary, 18c has added two possible values for RECOVERY_MODE: ‘MANAGED WITH QUERY’ and ‘MANAGED REAL TIME APPLY WITH QUERY’. In multitenant, those appear for a standby CDB opened read-only when at least one user PDB is also opened read-only.

For DBaaS, you probably run with Grid Infrastructure (because Oracle Restart does not allow databases on ACFS). For single-tenant, this new feature can solve the problem of activating the ADG option by mistake, at least if we have good control of the PDB states depending on the role.
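One hypothetical way to get that control without Grid Infrastructure is a startup trigger that opens the user PDBs only when the database runs in the primary role. This is only a sketch of the idea (the trigger name and the 'open all' policy are my assumptions, not something from this test):

```sql
create or replace trigger open_pdbs_on_primary
after startup on database
declare
  v_role v$database.database_role%type;
begin
  select database_role into v_role from v$database;
  -- Open the user PDBs only on the primary; on a physical standby
  -- they stay in MOUNT so that no ADG usage is recorded
  if v_role = 'PRIMARY' then
    execute immediate 'alter pluggable database all open';
  end if;
end;
/
```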

 

The post 18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only appeared first on Blog dbi services.

Oracle Linux 7 for ARM is now Generally Available

Wim Coekaerts - Sun, 2018-06-24 13:01

We released Oracle Linux 7 for ARM a few days ago, as General Availability. We have been making previews available for a few months now, but the time has come to put support behind it and make clear to customers and partners that this is a real product, not just a preview.

A few specific things:

- This is a 64-bit only version. We do not intend to support ILP32. Our focus is on making sure we can provide a high-quality server product to run serious applications, now and in the future, and I think it's fair to say that ILP32 would just be more work with little added value toward that goal. So OL7 is a very clean 64-bit-only distribution.

- Oracle Linux 7 Update 5 is the base level of OL7 for ARM. We have done a lot of work to ensure that it's very close to x86 (x86-64). Our ARM packages are built from the same source RPMs as the x86 version, which allows us to have as little deviation as possible, if any, between the two architectures. We want it to be as seamless as possible to go from one architecture to the other. We will make the same errata available across the architectures and, where it makes sense, have the same repo names and structure.

- Our ARM port uses UEK5 only. The other distribution kernels are still a bit in flux on ARM, because their x86 kernel is a bit older and ARM is still undergoing a decent amount of churn. For us, with the UEK model, it was a lot easier to align the two architectures, and it worked out perfectly fine timing-wise. UEK5 is based on 4.14.x mainline Linux, so we have the same kernel and the same source base on x86 as well as ARM. That means dtrace is there, ksplice support is there, etc. Errata for one architecture, when relevant on the other, will be released at the same time. Again: streamline it as much as possible, so that our customers and partners who have both x86 and ARM architectures won't really notice any difference at all.

Also, UEK5 on x86 is built with the default gcc version that comes with OL7 (gcc 4.8). However, on ARM we decided to build with gcc 7.3. And UEK5 on ARM is built with a 64k page size.

- As with x86, Oracle Linux for ARM is freely downloadable. We have installable ISO images. Errata will also be freely available. It can be used in test, dev or production; we have no restrictions on that. If you want support, you get a support subscription, just like on x86; otherwise you can use it as much as you want. No auth keys, no private repos. Just simple public https://yum.oracle.com for errata. And of course the source code as well.

- Since a lot of enhancements have gone into the toolchain (compiler, glibc, ...), we decided to provide a gcc 7.3 environment with OL7/ARM. The Software Collection 3.0 repo on ARM contains the 'Oracle ARM toolset', which is basically gcc 7.3 and related items. The toolchain team is doing a lot of work on ARM optimizations (as is the kernel team, for that matter).

- Hardware partners: Right now we have validated and work closely with our partners Ampere Computing and Cavium. The majority of our testing and validation happens on their platforms and chips.

- ISVs. In order to build out a viable server/cloud platform for ARM, we (like everyone else) need our ISV partner ecosystem to follow us. This is one reason we decided to go GA: we want to show that we are serious about this platform, and that helps partners move forward as well. Internally, we have already worked with the MySQL team to provide MySQL 8.0 for ARM. We are also doing work on Java optimizations and looking at other products.

- Cloud-'native': docker for Oracle Linux/ARM is there; we have Oracle Linux images on Docker Hub (in case you didn't know). You will see k8s show up, etc.

- Basics/beginnings of EPEL. A lot of our users on x86 use a lot of EPEL packages. As many of you already know, we started rebuilding (not modifying) the EPEL packages so that they are (1) signed by us, (2) come from the same repo source as the base OL (easier to have a single download location), and (3) easily available to Oracle Cloud users on the 'internal' cloud network. We are going to expand this to ARM as well, slowly growing the ARM/EPEL repo. This will take some time.

- We have a Raspberry Pi 3B and 3B+ image that is still pre-GA, with UEK5 and grub. Expect to see an update to the GA code-base in the near future. RPi3 is more of a 'fun' and easy way to get to play with OL7/ARM; we don't see it (sorry) as a production target.

Go download it, play with it, have fun...

and thanks to my team at Oracle for making this happen, and also a shout out to our partners for their contributions (Ampere Computing folks! and Cavium folks!)

Cursor_sharing force

Jonathan Lewis - Sat, 2018-06-23 06:05

Prompted by a recent ODC (OTN) question I’ve just written up an example of one case where setting the cursor_sharing parameter to force doesn’t work as you might expect. It’s a specific example of what I believe is a theme that can appear in several different circumstances: if your SQL mixes “genuine” bind variables with literals then the literals may not be substituted.

Here’s a simple data set to start with:


rem
rem     Script:         cursor_sharing_limit.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:
rem
rem     Last tested
rem             18.1.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1
as
select
        rownum            n1,
        rownum            n2,
        lpad(rownum,10)   small_vc,
        rpad('x',100,'x') padding
from dual
connect by
        level <= 1e4 -- > comment to avoid WordPress format issue
;

I haven’t bothered to gather stats – it’s not needed in trivial cases of CTAS.

Here’s one way to execute SQL that doesn’t get literal substitution when (apparently) it should:


alter system flush shared_pool;
alter session set cursor_sharing=force;

declare
        m_ct number;
        m_n1 number := 20;
begin
        execute immediate
                'select /*+ trace this */ count(*) from t1 where n2 = 15 and n1 = :b1'
                into m_ct using m_n1;
        dbms_output.put_line(m_ct);

        execute immediate
                'select /*+ trace this too */ count(*) from t1 where n1 = 15 and n2 = 15'
                into m_ct;
        dbms_output.put_line(m_ct);
end;
/

alter session set cursor_sharing=exact;

I’ve included a “hint” that allows me to find the SQL statements in v$sql very easily – and here they are, preceded by the query I used to find them:


select  sql_id, parse_calls, executions, rows_processed, sql_text 
from    v$sql
where   sql_text like 'select%trace this%'
and     sql_text not like '%v$sql%'
;


SQL_ID        PARSE_CALLS EXECUTIONS ROWS_PROCESSED
------------- ----------- ---------- --------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
cbu4s78h5pfj5           1          1              1
select /*+ trace this too */ count(*) from t1 where n1 = :"SYS_B_0" and n2 = :"SYS_B_1"

cru67sufptx8x           1          1              1
select /*+ trace this */ count(*) from t1 where n2 = 15 and n1 = :b1

Notice how one statement – which used only literals – underwent bind variable transformation but the other – with its mix of literals and bind variables – didn’t. On the other hand, if I execute the mixed format statement from a basic SQL*Plus session then pull the plan from memory, this is what I get:


SQL> select /*+ SQL*Plus session */ count(*) from t1 where n2 = 15 and n1 = :b1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select  sql_id, parse_calls, executions, rows_processed, sql_text
  2  from    v$sql
  3  where   sql_text like 'select%Plus session%'
  4  and     sql_text not like '%v$sql%'
  5  ;

SQL_ID        PARSE_CALLS EXECUTIONS ROWS_PROCESSED
------------- ----------- ---------- --------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
gq2qy2a9yuta7           1          1              1
select /*+ SQL*Plus session */ count(*) from t1 where n2 = :"SYS_B_0" and n1 = :b1

When trying to take advantage of cursor_sharing = force, you have to think about how the SQL gets to the database engine: is it delivered directly, or does it pass through some layer of code that protects the raw SQL statement from the substitution code?

Footnote:

I happen to have used an example that puts the bind variable into the where clause; you will see the same effect even if the bind variables are in the select list – for example if you’ve selected something like to_char(date_col, :formatting_variable).
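If you want to check that claim yourself, here is a sketch along the same lines as the tests above (assuming the t1 table from this article and cursor_sharing still set to force); if the behaviour matches the footnote, the literal 15 should still be visible in the shared SQL text:

```sql
variable fmt varchar2(20)
exec :fmt := 'YYYY-MM-DD'

select /*+ select list bind */ to_char(sysdate, :fmt) from t1 where n1 = 15;

select  sql_text
from    v$sql
where   sql_text like 'select%select list bind%'
and     sql_text not like '%v$sql%'
;
```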

Updates about processor vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 ...

Oracle Security Team - Fri, 2018-06-22 16:06

Two new processor vulnerabilities were publicly disclosed on May 21, 2018: CVE-2018-3640 (“Spectre v3a” or “Rogue System Register Read”) and CVE-2018-3639 (“Spectre v4” or “Speculative Store Buffer Bypass”). Both vulnerabilities have received a CVSS Base Score of 4.3.

Successful exploitation of vulnerability CVE-2018-3639 requires local access to the targeted system.  Mitigating this vulnerability on affected systems will require both software and microcode updates. 

Successful exploitation of vulnerability CVE-2018-3640 also requires local access to the targeted system.  Mitigating this vulnerability on affected Intel processors is solely performed by applying updated processor-specific microcode.

Working with the industry, Oracle has just released the required software updates for Oracle Linux and Oracle VM along with the microcode recently released by Intel for certain x86 platforms.  Oracle will continue to release new microcode updates and firmware patches as production microcode becomes available from Intel. 

As for previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish a list of products affected by CVE-2018-3639 and CVE-2018-3640, along with other technical information, on My Oracle Support (MOS Note ID 2399123.1). In addition, the Oracle Cloud teams will be working to identify and apply necessary updates, if warranted, as they become available from Oracle and third-party suppliers, in accordance with applicable change management processes.

Introduction to databases for {Power.Coders} with MySQL

Yann Neuhaus - Fri, 2018-06-22 15:04
    This week I took some days off to do something related to my job, but a bit different. I gave a course on Databases and SQL. But not for my usual customers. And not with the database I know best. So, it is still in a domain that I know, but out of my comfort zone. And this is something that we should do more often, because it gives a little apprehension and a big satisfaction.

    The little apprehension is because there were a lot of unknown parameters for me. I taught students from Power.Coders, a coding academy for refugees: 18 young people with very different backgrounds. Some already knew how to code. Some had done front-end work and website design in the past but had no idea what a server is. Others are doing all of this for the first time and have to learn what a program is. But there is one thing common to everybody here: all are motivated to learn, understand, and acquire the knowledge and experience to start a career in IT. This is the good energy that makes everything possible.

    The big satisfaction is because, with everybody doing their best, things work and everyone gains confidence. It is out of the comfort zone that you get your best, and that holds for the students as well as for the teacher and coaches. Because I wanted the course to stay consistent with what they had learned in the curriculum, I did the database examples and exercises on MySQL and from PHP code. I had never done that, so I had to study as well. The big advantage I have, from experience, is that I know where to search on the internet. One of the students told me: “don’t tell me to google for it, that’s like swimming in the ocean!”. When you are not used to it, ‘googling’ for solutions is not easy. Experienced people do not always consider that, and they answer in forums with a rude “RTFM” or “LMGTFY”. I’ve never felt obliged to answer on forums, but when I do, it is not to argue with the OP about his question or whether he did his own research first. If I choose to answer, my goal is to explain as clearly as possible. And I do not fear repeating myself, because the more I explain, the better I understand what I explain. I remember my first answers on the dba-village forum. Some questions were recurrent, and each time I tried to answer with a shorter and clearer explanation.

    Google Slides

    I’ve prepared slides and exercises for 3 days, and here I share the content (however, there were a lot of whiteboard explanations, so the slides may not be sufficient). I made the pages with Google Sites and the presentation with Google Slides. Both were, again, new things for me, out of my .ppt comfort zone. It went very well. For a correct presenter experience, I installed the “Google Slides Auto Resize Speaker Notes” Google Chrome extension. One thing annoys me with Google Slides: readers cannot copy/paste from the slides. Please comment here if you have a solution. My workaround was to copy the code to the presenter’s notes and tell the students to open them (with the ‘s’ key) and copy from there. But I don’t like to duplicate the code.

    – Day 1 on Data Structures:
    Data Modeling, YAML, XML, JSON, CSV and introduction to relational tables.

    Exercise: load some OpenFlight data into MySQL

    – Day 2 on Introduction to Databases:
    RDBMS, create and load tables, normalization

    Exercise: a PHP page to query a simple table

    – Day 3 on Introduction to SQL:
    SELECT, INSERT, DELETE, UPDATE, ACID and transactions

    Exercise: a PHP page to list Flights from a multi-criteria form
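    To give an idea of the Day 3 exercise, a multi-criteria search could look like the following sketch (the table and column names are my assumptions based on the OpenFlight data loaded on Day 1, not the exact schema used in class):

```sql
-- Find flights from a given airport, optionally filtered by destination
select f.source_airport, f.destination_airport, a.name as airline
from flights f
join airlines a on a.airline_id = f.airline_id
where f.source_airport = :src
  and (:dest is null or f.destination_airport = :dest)
order by a.name;
```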

    In addition to the course, I also did some coaching for their PHP exercises. I discovered this language (which I do not like: meaningless error messages, improbable implicit conversions, …). But at least we were able to make some concepts clearer: what a web server is, sessions, cookies, access to the database… And the method is important. How to approach code that doesn’t work, where nothing is displayed: change the connection parameters to wrong ones to see if we reach that part of the code, add an explicit syntax error in the SQL statement to see if errors are correctly trapped, echo some variables to see if they are set. Before learning magic IDEs, we must lay the basics that will help everywhere. The main message is: you are never stuck with an error. There is always a possibility to trace more. And when you have all the details, you can focus your google search better.

    Thanks

    Big thanks to SQLFiddle, where it is easy to run some SQL without installing anything. However, being 20 people on a small Wi-Fi, using local resources is preferable, so we installed MAMP (see here how I discovered it and had to fix a bug at the same time). Big thanks to Chris Saxon’s ‘Databases for Developers’ videos, which will help the students review all the concepts in an entertaining way. And thanks to w3schools for the easy learning content.

    Oh, and thanks to facebook’s sponsoring-privacy-intrusive-algorithms! Because this is how I heard about Powercoders. For the first time in my life, I clicked on a sponsored link on social media. It was for the WeMakeIt crowdfunding project for this Powercoders curriculum in Lausanne. I read about the project, watched the video, and that’s how I decided to participate. You should watch this Christian Hirsig TED talk as well. At a time when everybody is talking about autonomous self-driven cars, his accomplishment was to move from being completely powerless to being back in the driver’s seat…

    And of course thanks to the Powercoders organizers, students, teachers, coaches, mentors and the companies who propose internships to complete the curriculum (I was happy, and proud of my employer, that dbi services was in immediately).
    Teaching motivated people who want to learn as much as possible is a great experience, and not all days are like this in professional life. And explaining topics that are outside my comfort zone is a lot of work, but also a rewarding experience. In this world where technology goes faster and faster, showing the approach and the method to adapt to new topics gives a lot of self-confidence.

     

    The post Introduction to databases for {Power.Coders} with MySQL appeared first on Blog dbi services.

Arrgs. My Bot Doesn't Understand Me! Why Intent Resolutions Sometimes Appear to Be Misbehaving

OTN TechBlog - Fri, 2018-06-22 10:17

Article by Grant Ronald, June 2018

One of the most common questions that gets asked when someone starts building a real bot is “Why am I getting strange intent resolutions?”. For example, someone tests the bot with random key presses like “slkejfhlskjefhksljefh” and finds an 80% resolution for “CheckMyBalance”. The first reaction is to blame the intent resolution within the product. However, the reality is that you’ve not trained it to know any better. This short article gives a high-level conceptual explanation of how models do and don’t work.

READ THE FULL ARTICLE

Related Content

TechExchange - First Step in Training Your Bot

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

OTN TechBlog - Fri, 2018-06-22 09:05

Article by Frank Nimphius, Marcelo Jabali - June 2018

Chatbot support for multiple languages is a worldwide requirement. Almost every country has the need for supporting foreign languages, be it to support immigrants, refugees, tourists, or even employees crossing borders on a daily basis for their jobs.

According to the Linguistic Society of America1, as of 2009, 6,909 distinct languages were classified, a number that has since grown. Although no bot needs to support all languages, you can tell that for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to implement support for.

This article explores Oracle's approach to multi language support in chatbots. It explains the tooling and practices for you to use and follow to build bots that understand and "speak" foreign languages.

Read the full article.

 

Related Content

TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components 

TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately

TechExchange - First Step in Training Your Bot

converting TIMESTAMP(6) to TIMESTAMP(0)

Tom Kyte - Fri, 2018-06-22 08:26
Currently I have a column with datatype TIMESTAMP(6) but now I have a requirement to change it to TIMESTAMP(0). Because we cannot decrease the precision, ORA-30082: datetime/interval column to be modified must be empty to decrease fractional sec...
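The usual workaround (not shown in the excerpt above) is to stage the data elsewhere, empty the column or add a new one, change the datatype, and reload the values at the reduced precision. Whatever route you take, it is worth checking whether your database rounds or truncates fractional seconds on the way; this sketch shows plain truncation:

```python
from datetime import datetime

def truncate_fractional_seconds(ts: datetime) -> datetime:
    """Drop fractional seconds, mimicking a TIMESTAMP(6) -> TIMESTAMP(0)
    conversion done by truncation (not rounding)."""
    return ts.replace(microsecond=0)

ts = datetime(2018, 6, 22, 8, 26, 41, 123456)
print(truncate_fractional_seconds(ts))  # 2018-06-22 08:26:41
```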
Categories: DBA Blogs

Mail Restrictions using UTL_SMTP

Tom Kyte - Fri, 2018-06-22 08:26
Hi Tom, I have a requirement to send email to particular domain mail ids. But my mail server is a global mail server; we can send mail to any mail ids. Is there any option in Oracle to restrict the mail sending globally? For example: My mail host is...
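UTL_SMTP is a raw SMTP client and has no recipient filtering of its own, so the usual approach is to validate addresses in your own code before handing them to the mail API. A sketch of such a domain allowlist check (domain names invented):

```python
# Illustrative pre-send check restricting recipients to approved domains.
# The domains here are placeholders for whatever your policy allows.
ALLOWED_DOMAINS = {"example.com", "corp.example.org"}

def recipient_allowed(address: str) -> bool:
    """Allow only addresses whose domain is on the allowlist."""
    _, sep, domain = address.rpartition("@")
    return bool(sep) and domain.lower() in ALLOWED_DOMAINS

print(recipient_allowed("alice@example.com"))  # True
print(recipient_allowed("bob@gmail.com"))      # False
```

The same check could of course be written as a small PL/SQL function wrapping the send procedure, so that every caller goes through it.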
Categories: DBA Blogs

Expanded Oracle Accelerator Gives Texas Startups a Boost

Oracle Press Releases - Fri, 2018-06-22 06:00
Press Release
Expanded Oracle Accelerator Gives Texas Startups a Boost New Austin-based program offers enterprise customer network, mentoring, resources and cloud technology, as well as Capital Factory collaboration, to help startups grow and compete globally

Redwood Shores, Calif.—Jun 22, 2018

Oracle today announced the opening of the Oracle Startup Cloud Accelerator in Austin, Texas, the global program’s first U.S. location and part of the Oracle Global Startup Ecosystem. The new accelerator provides statewide startups with access to a network of more than 430,000 Oracle customers, technical and business mentors, state-of-the-art technology, co-working space at Capital Factory, introductions to partners, talent, and investors, and free Oracle Cloud credits. In addition to local expertise, the program offers an ever-expanding global community of startup peers and program alumni.

Austin Startup Cloud Accelerator

The Oracle Startup Cloud Accelerator, which is open to early-stage technology and technology-enabled startups, is accepting applications through August 7. Startups will begin the six-month program in early September.

Oracle’s Austin Startup Cloud Accelerator is run by JD Weinstein. Weinstein, a former Principal at WPP Ventures and previously a Venture Associate at Capital Factory, brings a deep understanding of the local startup ecosystem and scaling startups through enterprise relationships.

“Austin and the State of Texas are thriving centers of innovation, and we are proud to dive in and support the startup community with cutting edge resources, including enterprise customer channels, hands-on experience with Oracle technical and product teams, mentoring from top business leaders, executives, and investors, as well as connections to thousands of entrepreneurs and corporate partners through our collaboration with Capital Factory,” said JD Weinstein, head of Oracle Startup Ecosystem in Austin.

Capital Factory & Austin Network

Oracle is working with Capital Factory to provide connections to the organization’s expansive network of local entrepreneurs, prominent CEOs, venture capitalists, corporations, and government officials. Startups in Oracle’s accelerator will also receive access to Capital Factory’s Mentor Network, free co-working space, and will benefit from the reach of the organization’s social media and event communities. Members of Oracle’s broader Global Startup Ecosystem will also benefit from the relationship with Capital Factory.

“We are excited that Oracle has invested in Austin as the first U.S. location of its global accelerator,” said Joshua Baer, founder and executive director, Capital Factory. “The combination of our mentor network and Oracle’s cloud platform and customer connections will provide startups a major advantage in growing their business.”

The Startup Cloud Accelerator is supported by Oracle’s rapidly growing presence in Austin. The company recently opened a state-of-the-art campus on Lady Bird Lake. Oracle’s expanding employee base and the new facility will provide additional resources and support for startups in the accelerator program.

Commitment to Global Startups

“Rooted in its own entrepreneurial beginnings, Oracle has long believed that startups are at the heart of innovation,” said Reggie Bradford, senior vice president, Oracle Startup Ecosystem and Accelerator. “The Austin accelerator is key to our mission of creating a global ecosystem of co-development and co-innovation where everyone—the startups, customers, and Oracle—can win.”

The Oracle Global Startup Ecosystem offers residential and nonresidential startup programs, plus a burgeoning higher education program, that power cloud-based technology innovation. The residential Oracle Startup Cloud Accelerator has locations in Austin, Bangalore, Bristol, Delhi–NCR, Mumbai, Paris, São Paulo, Singapore and Tel Aviv. Oracle Scaleup Ecosystem is the nonresidential, virtual-style program available for growing companies around the globe. Interested startups, venture capital firms and other organizations, regardless of their location, can apply for Oracle Scaleup Ecosystem here.

Contact Info
Julia Allyn
Oracle Corporate Communications
+1.650.607.1338
julia.allyn@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


MySQL 8.0 – Roles are finally there

Yann Neuhaus - Fri, 2018-06-22 05:29

Roles have existed in many RDBMS for a long time. Starting from version 8.0, this functionality is finally there for MySQL.
The most important advantage is that you define a role containing a set of permissions only once and then assign it to each user, instead of wasting time declaring the permissions individually.
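Conceptually, a user's effective privileges become the union of the privileges attached to each of their roles. A quick sketch of that resolution, with invented role and privilege names:

```python
# Conceptual sketch of role-based privilege resolution: a user's effective
# privileges are the union of the privileges of all roles granted to that
# user. Role names, schemas and privileges are made up for illustration.
ROLE_PRIVS = {
    "r_sakila_read":  {("sakila", "SELECT")},
    "r_sakila_write": {("sakila", "INSERT"), ("sakila", "UPDATE")},
}

def effective_privileges(granted_roles):
    """Union the privilege sets of every granted role."""
    privs = set()
    for role in granted_roles:
        privs |= ROLE_PRIVS.get(role, set())
    return privs

print(effective_privileges(["r_sakila_read", "r_sakila_write"]))
```

Changing the role's privilege set later then changes what every user holding that role can do, with no per-user grants to touch.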

In MySQL, a role can be created like a user, but without the “identified by” clause and without login:

mysqld2-(root@localhost) [(none)]> CREATE ROLE 'r_sakila_read';
Query OK, 0 rows affected (0.03 sec)
mysqld2-(root@localhost) [(none)]> select user,host,authentication_string from mysql.user;
+------------------+-----------+------------------------------------------------------------------------+
| user             | host      | authentication_string                                                  |
+------------------+-----------+------------------------------------------------------------------------+
| r_sakila_read    | %         |                                                                        |
| multi_admin      | localhost | $A$005$E?D/>efE+Rt12omzr.78VnfR3kxj8KLG.aP84gdPMxW7A/7uG3D80B          |
| mysql.infoschema | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| mysql.session    | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| mysql.sys        | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE                              |
| root             | localhost | {u]E/m)qyn3YRk2u.JKdxj9/6Krd8uqNtHRzKA38cG5qyC3ts5                     |
+------------------+-----------+------------------------------------------------------------------------+

After that you can grant some privileges to this role, as you usually do for users:

mysqld2-(root@localhost) [(none)]> grant select on sakila.* to 'r_sakila_read';
Query OK, 0 rows affected (0.10 sec)
mysqld2-(root@localhost) [(none)]> show grants for r_sakila_read;
+---------------------------------------------------+
| Grants for r_sakila_read@%                        |
+---------------------------------------------------+
| GRANT USAGE ON *.* TO `r_sakila_read`@`%`         |
| GRANT SELECT ON `sakila`.* TO `r_sakila_read`@`%` |
+---------------------------------------------------+
2 rows in set (0.00 sec)

Now you can create your user:

mysqld2-(root@localhost) [(none)]> create user 'u_sakila1'@localhost identified by 'qwepoi123098';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

And yes, check your password policy: starting from version 8.0, the new validate_password component replaces the old validate_password plugin. It is now enabled by default, so you don't have to install it anymore.

mysqld2-(root@localhost) [(none)]> show variables like 'validate_password_%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password_check_user_name    | ON     |
| validate_password_dictionary_file    |        |
| validate_password_length             | 8      |
| validate_password_mixed_case_count   | 1      |
| validate_password_number_count       | 1      |
| validate_password_policy             | MEDIUM |
| validate_password_special_char_count | 1      |
+--------------------------------------+--------+
7 rows in set (0.01 sec)
mysqld2-(root@localhost) [(none)]> create user 'u_sakila1'@localhost identified by 'QwePoi123098!';
Query OK, 0 rows affected (0.08 sec)

In my example the password check level defaults to MEDIUM, which means "Length; numeric, lowercase/uppercase, and special characters" (I will talk about the validate_password component in more detail in an upcoming blog). Let's go back to roles…

Grant the created role to your created user (as you usually grant a privilege):

mysqld2-(root@localhost) [(none)]> grant 'r_sakila_read' to 'u_sakila1'@localhost;
Query OK, 0 rows affected (0.01 sec)
mysqld2-(root@localhost) [(none)]> flush privileges;
Query OK, 0 rows affected (0.02 sec)

At this point if you check privileges of your user through a USING clause, you will get information about the granted roles and also privileges associated with each role:

mysqld2-(root@localhost) [(none)]> show grants for 'u_sakila1'@localhost using 'r_sakila_read';
+-------------------------------------------------------+
| Grants for u_sakila1@localhost                        |
+-------------------------------------------------------+
| GRANT USAGE ON *.* TO `u_sakila1`@`localhost`         |
| GRANT SELECT ON `sakila`.* TO `u_sakila1`@`localhost` |
| GRANT `r_sakila_read`@`%` TO `u_sakila1`@`localhost`  |
+-------------------------------------------------------+
3 rows in set (0.00 sec)

Now if you try to connect with your user and do a select of data on the database on which you have a read privilege, you will discover that something is still missing:

mysqld2-(root@localhost) [(none)]>  system mysql -u u_sakila1 -p
mysqld2-(u_sakila1@localhost) [(none)]> use sakila;
ERROR 1044 (42000): Access denied for user 'u_sakila1'@'localhost' to database 'sakila'
mysqld2-(u_sakila1@localhost) [(none)]> SELECT CURRENT_ROLE();
+----------------+
| CURRENT_ROLE() |
+----------------+
| NONE           |
+----------------+
1 row in set (0.00 sec)

Why?
Because you have to define which roles will be active when the user authenticates. You can do that by adding the "DEFAULT ROLE role" clause during user creation (starting from version 8.0.3), or even later through the following statement:

mysqld2-(root@localhost) [(none)]> set default role r_sakila_read to 'u_sakila1'@localhost;
Query OK, 0 rows affected (0.08 sec)

Otherwise, starting from version 8.0.2, you can let the server activate all granted roles by default whenever a user logs in, by setting the activate_all_roles_on_login variable to ON:

mysqld2-(root@localhost) [(none)]> show variables like '%activate%';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| activate_all_roles_on_login | OFF   |
+-----------------------------+-------+
1 row in set (0.00 sec)
mysqld2-(root@localhost) [(none)]> set global activate_all_roles_on_login=ON;
Query OK, 0 rows affected (0.00 sec)
mysqld2-(root@localhost) [(none)]> show variables like '%activate%';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| activate_all_roles_on_login | ON    |
+-----------------------------+-------+
1 row in set (0.01 sec)

So if you check again, all works correctly:

mysqld2-(root@localhost) [mysql]> select * from role_edges;
+-----------+----------------+-----------+-----------+-------------------+
| FROM_HOST | FROM_USER      | TO_HOST   | TO_USER   | WITH_ADMIN_OPTION |
+-----------+----------------+-----------+-----------+-------------------+
| %         | r_sakila_read  | localhost | u_sakila1 | N                 |
+-----------+----------------+-----------+-----------+-------------------+
1 row in set (0.00 sec)
mysqld2-(root@localhost) [(none)]>  system mysql -u u_sakila1 -p
mysqld2-(u_sakila1@localhost) [(none)]> use sakila
mysqld2-(u_sakila1@localhost) [sakila]> connect
Connection id:    29
Current database: sakila
mysqld2-(u_sakila1@localhost) [sakila]> select CURRENT_ROLE();
+---------------------+
| CURRENT_ROLE()      |
+---------------------+
| `r_sakila_read`@`%` |
+---------------------+
1 row in set (0.00 sec)
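To summarize the activation rules shown above: granted roles contribute nothing until they are activated, either because they are among the user's default roles or because activate_all_roles_on_login is ON. A small sketch of that logic:

```python
# Sketch of MySQL 8.0 role activation at login, as demonstrated above:
# roles granted to a user only take effect once activated, via the user's
# default roles or via activate_all_roles_on_login=ON.
def active_roles(granted, default_roles, activate_all_on_login):
    """Return the roles active for a session at login time."""
    if activate_all_on_login:
        return set(granted)
    # only default roles that are actually granted become active
    return set(default_roles) & set(granted)

granted = {"r_sakila_read"}
print(active_roles(granted, set(), activate_all_on_login=False))  # empty -> CURRENT_ROLE() is NONE
print(active_roles(granted, set(), activate_all_on_login=True))   # role is active
```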

Enjoy your roles now! ;)

 

The article MySQL 8.0 – Roles are finally there appeared first on Blog dbi services.

What’s new in EDB EFM 3.1?

Yann Neuhaus - Fri, 2018-06-22 04:24

At the beginning of this month EnterpriseDB announced a new version of its Failover Manager. Version 2.1 introduced controlled switchover operations, version 3.0 brought support for PostgreSQL 10, and now: what's new in version 3.1? It might seem this is just a bugfix release, but there is more, especially one enhancement I have been waiting for for a long time.

As you might remember: when you stopped EFM (before version 3.1), the efm.nodes file was always emptied again. What we usually did was create a backup of that file so we could just copy it back, but this is somewhat annoying. The current version comes with a new property in the efm.properties file to handle that better:

# When set to true, EFM will not rewrite the .nodes file whenever new nodes
# join or leave the cluster. This can help starting a cluster in the cases
# where it is expected for member addresses to be mostly static, and combined
# with 'auto.allow.hosts' makes startup easier when learning failover manager.
stable.nodes.file=true

When set to “true” the file will not be touched when you stop/restart EFM on a node:

root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.60:9998 192.168.22.61:9998 
root@:/etc/edb/efm/ [] systemctl stop efm-3.1.service
root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.60:9998 192.168.22.61:9998 
root@:/etc/edb/efm/ [] systemctl start efm-3.1.service
root@:/etc/edb/efm/ [] cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.60:9998 192.168.22.61:9998 

A small but really nice improvement. At least with our deployments the number of cluster nodes is rather static, so this helps a lot. While this is a new property, another property is gone:

root@:/etc/edb/efm/ [] grep efm.license efm.properties

This means you no longer need a license key to test EFM for more than 60 days, which is great as well. Another small improvement is that you can now see which node the VIP is currently running on:

root@:/etc/edb/efm/ [] /usr/edb/efm/bin/efm cluster-status efm
Cluster Status: efm

	Agent Type  Address              Agent  DB       VIP
	-----------------------------------------------------------------------
	Master      192.168.22.60        UP     UP       192.168.22.63*
	Standby     192.168.22.61        UP     UP       192.168.22.63

Allowed node host list:
	192.168.22.60 192.168.22.61

Membership coordinator: 192.168.22.61

Standby priority host list:
	192.168.22.61

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.60        0/40006F0        
	Standby     192.168.22.61        0/40006F0        

	Standby database(s) in sync with master. It is safe to promote.

When it comes to the VIP there is another enhancement, which is controlled by a new property:

root@:/etc/edb/efm/ [] grep virtualIp.single efm.properties | tail -1
virtualIp.single=true

When this is set to "true", EFM will use the same VIP address on the new master after a failover. This was the default behavior before EFM 3.1. When you want to use another VIP on a new master, you can now do that by switching it to "false" and providing a different VIP in the properties file on each node.
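The resulting behaviour can be sketched as follows (addresses invented): with virtualIp.single=true the promoted node reuses the shared VIP, while with false each node brings up its own locally configured VIP after promotion.

```python
# Sketch of the virtualIp.single behaviour described above. The addresses
# are illustrative; EFM reads the real ones from each node's properties.
def vip_after_failover(single_vip, shared_vip, node_vips, new_master):
    """Return the VIP that should come up on the promoted node."""
    if single_vip:
        return shared_vip          # same address follows the master role
    return node_vips[new_master]   # node-specific VIP from its properties

node_vips = {"192.168.22.60": "192.168.22.63",
             "192.168.22.61": "192.168.22.64"}
print(vip_after_failover(True, "192.168.22.63", node_vips, "192.168.22.61"))
print(vip_after_failover(False, "192.168.22.63", node_vips, "192.168.22.61"))
```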

Those are the important ones for me. The full list is in the documentation.

 

The article What’s new in EDB EFM 3.1? appeared first on Blog dbi services.

utl_dbws causes ORA-29532 and bad_record_mac

Yann Neuhaus - Fri, 2018-06-22 03:27

After installing the OJVM patch set update APR-2017 on an 11.2.0.4 database with PSU APR-2017 installed, the first call of the utl_dbws package was successful, but after a while utl_dbws calls always failed with ORA-29532 and bad_record_mac. All Java objects remained valid.
Also after trying the procedures described in MOS document 2314363.1, utl_dbws worked the first time, but after that it always failed.
We could observe that, a while after restarting the database, the m000 process ran and tried to recompile Java classes. When we waited until m000 had finished, utl_dbws always succeeded.
The m000 process start was caused by the parameter JAVA_JIT_ENABLED being set to TRUE.

When JAVA_JIT_ENABLED was set to false, utl_dbws always worked fine. Probably the locking of Java classes by the application prevented them from being recompiled properly.

 

The article utl_dbws causes ORA-29532 and bad_record_mac appeared first on Blog dbi services.

Pages

Subscribe to Oracle FAQ aggregator