Feed aggregator

AWS re:invent 2018 – Day 1

Yann Neuhaus - Tue, 2018-11-27 10:20

Yesterday was my first day at the AWS re:Invent conference. The venue is quite impressive: the conference is spread across six hotels where you can attend different types of sessions, including chalk talks, keynotes, hands-on labs and workshops. For my first day, I stayed in the same area, The Venetian, to keep things easy.


The walking distance between some venues is quite long, so you have to plan the day carefully to be able to see what you want to see. Fortunately there is a shuttle service, and I’ll move a bit more between hotels tomorrow. You also need to reserve your seat and arrive early to be sure to get into the room.

In my case, I wanted to start the week with a chalk talk about Oracle licensing in the cloud. As I was not able to reserve a seat, I had to wait in the walk-up line. The session was full: Oracle still interests lots of people, and licensing remains a concern, alongside performance, for many customers when they start planning a move to the public cloud.

I have been working with AWS services at a customer for a bit more than a year, but there is still a lot to learn and understand about AWS. That’s why I also attended an introductory session about VPC (Virtual Private Cloud) to better understand the network options when moving to AWS. To keep it simple, a VPC lets you run a private network configured as you wish inside AWS: you control the IP range you want to use, and you can configure the routing tables and so on.
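
As a small illustration, here is a minimal sketch of what that looks like with the AWS CLI (the CIDR blocks, the region and the vpc-... id are placeholder values, not anything shown in the session):

# Create a VPC with an IP range of your choice
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region eu-west-1

# Carve a subnet out of it and add a custom route table
# (replace vpc-0123456789abcdef0 with the VpcId returned by the first call)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0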

I also tried to attend a workshop about running Oracle databases on Amazon RDS, the AWS managed database service, and especially about how to migrate them from Oracle to the Amazon Aurora database using its PostgreSQL compatibility. The goal was to use two AWS products to run the migration: the AWS Schema Conversion Tool and the AWS Database Migration Service. Unfortunately, some issues with the WiFi constantly changing my IP, plus a limitation on my brand new AWS account that required additional checks from Amazon, prevented me from getting to the end of the workshop. But I got some credits to try it by myself a bit later, so I’ll most probably give the Schema Conversion Tool a try.

Some DBAs may worry about managed database services or Oracle’s announcements about the Autonomous Database, but I agree with the slides below from the AWS speaker during the workshop. I personally think that DBAs won’t disappear. Data and applications will still be around for quite a long time; the job may evolve, and we will spend more time on the application/data side than before.

DBA role in the Cloud

Today is another day; let’s forget about the DBA part for a bit and see more about DevOps…

This article AWS re:invent 2018 – Day 1 first appeared on Blog dbi services.

Compare all table content from one schema to another

Tom Kyte - Tue, 2018-11-27 07:46
Hi, I need to compare full schema table contents with another schema. Both will have same tables, with exact the same DDL, just some records will be different in few columns. Is there any way to execute a script or any way to do it in SQL Deve...
Categories: DBA Blogs

parsing recursively "#" from strings without PL/SQL procedure

Tom Kyte - Tue, 2018-11-27 07:46
Dear Tom, First of all thanks for the site, the tips... always useful... I faced recently an sql issue and i designed a beginning of answears but i think it should be optimized. I have a table containing posts from a social network platforms. Thos...
Categories: DBA Blogs

Sending mail using utl_mail

Tom Kyte - Tue, 2018-11-27 07:46
Dear Team, I tried to install UTL_MAIL using the below command sqlplus sys/<pwd> SQL> @$ORACLE_HOME/rdbms/admin/utlmail.sql SQL> @$ORACLE_HOME/rdbms/admin/prvtmail.plb it runs successfully and created the package and synonym but when i tri...
Categories: DBA Blogs

LOB compression

Tom Kyte - Tue, 2018-11-27 07:46
Hello, I have a table with BLOBs and they are actually character logs generated by fax transmission software. They are about 4k-16k big and pretty redundant, they compress with zlib to just about 1k-2k. And I want to store them compressed in DB. N...
Categories: DBA Blogs

SQL Server 2019 CTP 2.1 – A replacement of DBCC PAGE command?

Yann Neuhaus - Tue, 2018-11-27 07:08

Have you ever used the famous DBCC PAGE command? Folks who are interested in digging further into SQL Server storage have been using it for a while. We also use it during our SQL Server performance workshop, by the way. But the usage of such a command sometimes goes beyond curiosity, and it may be used in some troubleshooting scenarios. For instance, last week I had to investigate a locking contention scenario where I had to figure out which objects were involved, with their related pages (resource type) as the only way to identify them. SQL Server 2019 provides the sys.dm_db_page_info system function, which can be useful in this kind of scenario.


To simulate locks, let’s start by updating some rows in the dbo.bigTransactionHistory table as follows:

USE AdventureWorks_dbi;
GO

BEGIN TRAN;

UPDATE TOP (1000) dbo.bigTransactionHistory
SET Quantity = Quantity + 1


Now let’s take a look at sys.dm_tran_locks to get a picture of the locks held by the above query:

SELECT 
	resource_type,
	COUNT(*) AS nb_locks
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
GROUP BY
	resource_type


blog 148 - 1 - query locks

Referring to my customer scenario, let’s say I wanted to investigate the locks and the objects involved. For the simplicity of the demo I focused only on the sys.dm_tran_locks DMV, but generally speaking you would probably add other ones such as sys.dm_exec_requests, sys.dm_exec_sessions, etc.

SELECT 
	tl.resource_database_id,
	SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
	SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
	AND tl.resource_type = 'PAGE'


blog 148 - 2 - locks and pages

The sys.dm_tran_locks DMV contains the resource_description column, which provides contextual information about the resource locked by my query. When resource_type is PAGE, the resource_description column therefore contains the [file_id:page_id] pair.

SQL Server 2019 will probably send the DBCC PAGE command back to the stone age for some tasks, but let’s start with this old command:

DBCC PAGE (5, 1, 403636, 3) WITH TABLERESULTS;


blog 148 - 3 - dbcc page

DBCC PAGE did the job and provides an output that includes the page header section, where Metadata: ObjectId is stored. We may then use it with the OBJECT_NAME() function to get the corresponding table name.

SELECT OBJECT_NAME(695673526)


blog 148 - 4 - dbcc page - object_name

But using this command may be slightly controversial, because it is still an undocumented command, and there is no need to explain here how dangerous it can be to use it in production. Honestly, I have never encountered a situation where DBCC PAGE was an issue, but I cannot provide a full guarantee and using it is obviously at your own risk. In addition, applying DBCC PAGE to all the rows returned by my previous query can be a little bit tricky, and this is where the new sys.dm_db_page_info comes into play.

;WITH tran_locks
AS
(
	SELECT 
		tl.resource_database_id,
		SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
		SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
	FROM 
		sys.dm_tran_locks AS tl
	WHERE 
		tl.request_session_id = 52
		AND tl.resource_type = 'PAGE'
)
SELECT 
	OBJECT_NAME(page_info.object_id) AS table_name,
	page_info.*
FROM 
	tran_locks AS t
CROSS APPLY 
	sys.dm_db_page_info(t.resource_database_id, t.file_id, t.page_id,DEFAULT) AS page_info


This system function provides plenty of information, mainly coming from the page header, in tabular format, and it makes my previous requirement easier to address, as shown below.

blog 148 - 5 - sys.dm_db_page_info

The good news is that this function is officially documented; but, fortunately or unfortunately depending on your point of view, you will still have to rely on DBCC PAGE for deep-dive studies.

Happy troubleshooting!


This article SQL Server 2019 CTP 2.1 – A replacement of DBCC PAGE command? first appeared on Blog dbi services.

Dump logfile

Jonathan Lewis - Tue, 2018-11-27 03:24

Here’s a little procedure I’ve been using since Oracle 8i to dump the contents of the current log file – I’ve mentioned it several times in the past but never published it, so I’ll be checking for references to it and linking to it.

The code hasn’t changed in a long time, although I did add a query to get the full tracefile name from v$process when that became available. There’s also an (optional) call to dbms_support.mysid, which slid into the code to pick up the SID of the current session when that package became available.


rem
rem     Script:         c_dump_log.sql
rem     Author:         Jonathan Lewis
rem     Dated:          December 2002
rem     Purpose:        Create procedure to dump the current online redo log file.
rem
rem     Last tested
rem             12.2.0.1
rem             11.1.0.7
rem             11.2.0.6
rem             10.2.0.5
rem             10.1.0.4
rem              9.2.0.8
rem              8.1.7.4
rem
rem     Notes:
rem     Must be run as a DBA
rem     Very simple minded - no error trapping
rem     

create or replace procedure dump_log
as
        m_log_name      varchar2(255);
        m_process       varchar2(255);

begin
        select 
                lf.member
        into
                m_log_name
        from
                V$log           lo,
                v$logfile       lf
        where 
                lo.status = 'CURRENT'
        and     lf.group# = lo.group#
        and     rownum = 1
        ;

        execute immediate
        'alter system dump logfile ''' || m_log_name || '''';

        select
                spid
        into
                m_process
        from
                v$session       se,
                v$process       pr
        where
                se.sid = --dbms_support.mysid
                        (select sid from v$mystat where rownum = 1)
        and     pr.addr = se.paddr
        ;

        dbms_output.put_line('Trace file name includes: ' || m_process);

        select
                tracefile
        into
                m_process
        from
                v$session       se,
                v$process       pr
        where
                se.sid = --dbms_support.mysid
                        (select sid from v$mystat where rownum = 1)
        and     pr.addr = se.paddr
        ;

        dbms_output.put_line('Trace file is: ' || m_process);

end;
.
/

show errors

-- drop public synonym dump_log;
create public synonym dump_log for dump_log;
grant execute on dump_log to public;

I don’t use the procedure often, but if I want to find out what redo is generated during a test I usually follow the sequence:

  • alter system switch logfile;
  • do the experiment
  • execute dump_log

If you’re running in a PDB there’s an extra step needed, as you can’t “switch logfile” inside a PDB, so I’ll either do a log file switch before I start the test or (if there are steps in the test script that could generate a lot of redo I don’t want to see) include a “pause” in the test script and use another session to do the logfile switch – in both cases the session doing the switch has to be connected to the CDB.
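
As a rough illustration, the non-PDB sequence could be wrapped in a small shell script like the sketch below (my_experiment.sql is a placeholder for the test script, and it assumes the dump_log procedure above has been created); in the PDB case the “switch logfile” would be issued from a separate session connected to the CDB, as described above:

#!/bin/bash
# Sketch only: switch the logfile, run the experiment, then dump the resulting log file.
sqlplus -s / as sysdba <<'EOF'
alter system switch logfile;
@my_experiment.sql
set serveroutput on
execute dump_log
EOF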

You will have noticed the creation of the public synonym and granting of the execute privilege to public. In my own sandbox database that’s a convenience – you may want to be a little more protective in your development and test systems.

The “dump logfile” command has a number of options for selective dumping – I have a note in my file commenting on these options, but I haven’t checked if there are any new ones (or changes to existing ones) for a long time:


alter system dump logfile '{filename}'
        scn min {first SCN to dump}
        scn max {last SCN to dump}
        time min {seconds since midnight at the end of 1st Sept 1987}
        time max {see redo_time_calc.sql}
        layer {integer} opcode {integer} e.g.:
                layer 23        Block Written Records
                layer 5         Undo handling in general
                layer 5 opcode 4        Undo Seg header on commit; or rollback;
                layer 9999 opcode 9999  Trick to validate the whole log file structure
        xid {usn} {slot} {sequence}     -- 10g only, may break on IMU redo (see below)
        objno {object_id}               -- 10g only, may break on IMU redo (see below)
        dba min {datafile no} . {blockno} -- with spaces either side of the dot.
        dba max {datafile no} . {blockno} -- with spaces either side of the dot.
        rba min {log file seq no} . {blockno} -- with spaces either side of the dot.
        rba max {log file seq no} . {blockno} -- with spaces either side of the dot.
(The dots in the last four options become invalid syntax in 10g).

The introduction to this note references back to a presentation I did in the year 2000, but the closing comment suggests that I probably haven’t checked the list since some time in the 10g timeline.

The reference to redo_time_calc.sql points to the following script, which expresses the time as the number of seconds since 1st Jan 1988, with the unfortunate simplification that Oracle thinks there are 31 days in every month of the year:


rem
rem     Script:         redo_time_calc3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2012
rem     Purpose:
rem

select 
        86400 * (
                31 *
                        months_between(
                                trunc(sysdate,'MM'),
                                to_date('01-Jan-1988','dd-mon-yyyy')
                        ) +
                sysdate - trunc(sysdate,'MM')
        )       redo_now
from 
        dual
;



select 
        86400 * (
                (sysdate - 10/1440) - trunc((sysdate-10/1440),'MM') + 
                31 * 
                        months_between(
                                trunc((sysdate - 10/1440),'MM'),
                                to_date('01-Jan-1988','dd-mon-yyyy')
                        )
                )               ten_minutes_ago,
        86400 * (
                sysdate - trunc(sysdate,'MM') + 
                31 * 
                        months_between(
                                trunc(sysdate,'MM'),
                                to_date('01-Jan-1988','dd-mon-yyyy')
                        )
                )               redo_now,
        to_char(sysdate,'mm/dd/yyyy hh24:mi:ss')        now
from 
        dual
;

This isn’t a piece of code I use much – the original version (which I published in Oracle Core, p.241) was something I wrote in 2003 and had to adjust by hand each time I used it without realising that I’d got it wrong. Luckily someone pointed out my error and gave me the corrected code a little while after I’d published the book. (It was one of those “why didn’t I think of that” moments – it seemed so obvious after he’d told me the right answer.)

return multiple address to mail_pkg

Tom Kyte - Mon, 2018-11-26 13:26
Hi Tom. I'm trying to get your package mail_pkg (found it on this site) to accept multiple values for the p_to (recipient list). begin mail_pkg.send ( p_sender_email => 'me@acme.com', p_from => 'Oracle Database Account <me@acme.co...
Categories: DBA Blogs

Merge vs Update

Tom Kyte - Mon, 2018-11-26 13:26
MERGE INTO pkt_prty_fncl_st ppst USING tmp_pkt_prty_fstate_clnb_stgg tmp on (tmp.fncl_ast_id = ppst.fncl_ast_id AND tmp.prty_id = ppst.prty_id AND tmp.pkt_pcsg_st_cd = ppst.pkt_pcsg_st_cd AN...
Categories: DBA Blogs

Select first value if exists, otherwise select another value

Tom Kyte - Mon, 2018-11-26 13:26
Hello I have a table like this <code>ID NTYPE 1 0 2 0 3 1 4 2</code> I need a select to get all IDs according of a list of NTYPE (1 to N), but if any of the NTYPE list does not exist then get where NTYPE = 0.. ...
Categories: DBA Blogs

Strange behavior when patching GI/ASM

Yann Neuhaus - Mon, 2018-11-26 12:45

I tried to apply a patch to my 18.3.0 GI/ASM two-node cluster on RHEL 7.5.
The first node worked fine, but the second node always got an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Successful

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys, I found out that the host keys also had to be exchanged for root.
So I connected as root and did an ssh from dbserver01 to dbserver02 and from dbserver02 to dbserver01.

After I exchanged the host keys the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further, and the statement producing the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and, from there, an additional session back to dbserver01 (itself).
I don’t know why, but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember ever having had to do a key exchange from the root user onto the same host.
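
For reference, here is a minimal sketch of that root-to-self key setup on dbserver01 (the file names are the usual OpenSSH defaults; adjust to your environment):

# as root on dbserver01
ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa       # only if root has no key pair yet
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys    # allow root@dbserver01 -> root@dbserver01
chmod 600 /root/.ssh/authorized_keys
ssh-keyscan -t ecdsa dbserver01 >> /root/.ssh/known_hosts  # record the host key for strict checking
ssh root@dbserver01 hostname                               # should now work without any prompt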

Did you run into the same problem, or do you know a better way to handle this? Write me a comment!

This article Strange behavior when patching GI/ASM first appeared on Blog dbi services.

Shrink Space

Jonathan Lewis - Mon, 2018-11-26 10:37

I have never been keen on the option to “shrink space” for a table because of the negative impact it can have on performance.

I don’t seem to have written about it in the blog but I think there’s something in one of my books pointing out that the command moves data from the “end” of the table (high extent ids) to the “start” of the table (low extent ids) by scanning the table backwards to find data that can be moved and scanning forwards to find space to put it. This strategy can have the effect of increasing the scattering of the data that you’re interested in querying if most of your queries are about “recent” data, and you have a pattern of slowly deleting ageing data. (You may end up doing a range scan through a couple of hundred table blocks for data at the start of the table that was once packed into a few blocks near the end of the table.)

In a discussion with a member of the audience at the recent DOAG conference (we were talking about execution plans for queries that included filter subqueries) I suddenly thought of another reason why (for an unlucky person) the shrink space command could be a disaster – here’s a little fragment of code and output to demonstrate the point.


rem
rem     Script:         shrink_scalar_subq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2018
rem     Purpose:
rem
rem     Versions tested
rem             12.2.0.1
rem

select
        /*+ gather_plan_statistics pre-shrink */
        count(*)
from    (
        select  /*+ no_merge */
                outer.*
        from
                emp outer
        where
                outer.sal > (
                        select  /*+ no_unnest */
                                avg(inner.sal)
                        from
                                emp inner
                        where
                                inner.dept_no = outer.dept_no
                )
        )
;

alter table emp enable row movement;
alter table emp shrink space compact;

select
        /*+ gather_plan_statistics post-shrink  */
        count(*)
from    (
        select  /*+ no_merge */
                outer.*
        from emp outer
        where outer.sal >
                (
                        select /*+ no_unnest */ avg(inner.sal)
                        from emp inner
                        where inner.dept_no = outer.dept_no
                )
        )
;

The two queries are the same and the execution plans are the same (the shrink command doesn’t change the object statistics, after all), but the execution time jumped from 0.05 seconds to 9.43 seconds – and the difference in timing wasn’t about delayed block cleanout or other exotic side effects.


  COUNT(*)
----------
      9498

Elapsed: 00:00:00.05


  COUNT(*)
----------
      9498

Elapsed: 00:00:09.43

The query is engineered to have a problem, of course, and enabling rowsource execution statistics exaggerates the anomaly – but the threat is genuine. You may have seen my posting (now 12 years old) about the effects of scalar subquery caching – this is another example of the wrong item of data appearing in the wrong place making us lose the caching benefit. The emp table I’ve used here is (nearly) the same emp table I used in the 2006 posting, but the difference between this case and the previous case is that I updated a carefully selected row to an unlucky value in 2006, but here in 2018 the side effects of a call to shrink space moved a row from the end of the table (where it was doing no harm) to the start of the table (where it had a disastrous impact).

Here are the two execution plans – before and after the shrink space – showing the rowsource execution stats. Note particularly the number of times the filter subquery ran – jumping from 7 to 3172 – the impact this has on the buffer gets, and the change in time recorded:

----------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:00.03 |    1880 |
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:00.03 |    1880 |
|   2 |   VIEW                |      |      1 |    136 |   9498 |00:00:00.03 |    1880 |
|*  3 |    FILTER             |      |      1 |        |   9498 |00:00:00.03 |    1880 |
|   4 |     TABLE ACCESS FULL | EMP  |      1 |  19001 |  19001 |00:00:00.01 |     235 |
|   5 |     SORT AGGREGATE    |      |      7 |      1 |      7 |00:00:00.02 |    1645 |
|*  6 |      TABLE ACCESS FULL| EMP  |      7 |   2714 |  19001 |00:00:00.02 |    1645 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("OUTER"."SAL">)
   6 - filter("INNER"."DEPT_NO"=:B1)


----------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:09.42 |     745K|
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:09.42 |     745K|
|   2 |   VIEW                |      |      1 |    136 |   9498 |00:00:11.71 |     745K|
|*  3 |    FILTER             |      |      1 |        |   9498 |00:00:11.70 |     745K|
|   4 |     TABLE ACCESS FULL | EMP  |      1 |  19001 |  19001 |00:00:00.01 |     235 |
|   5 |     SORT AGGREGATE    |      |   3172 |      1 |   3172 |00:00:09.40 |     745K|
|*  6 |      TABLE ACCESS FULL| EMP  |   3172 |   2714 |     10M|00:00:04.33 |     745K|
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("OUTER"."SAL">)
   6 - filter("INNER"."DEPT_NO"=:B1)


Footnote:

For completeness, here’s the code to generate the emp table. It’s sitting in a tablespace using system managed extents and automatic segment space management.


create table emp(
        dept_no         not null,
        sal,
        emp_no          not null,
        padding,
        constraint e_pk primary key(emp_no)
)
as
with generator as (
        select  null
        from    dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',60)
from
        generator       v1,
        generator       v2
where
        rownum <= 2e4 -- > comment to avoid wordpress format issue
;


insert into emp values(432, 20001, 20001, rpad('x',60));
delete /*+ full(emp) */ from emp where emp_no <= 1000;      -- > comment to avoid wordpress format issue
commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'EMP',
                method_opt       => 'for all columns size 1'
        );
end;
/



 

Our new product - Katana 18.1 (Machine Learning for Business Automation)

Andrejus Baranovski - Mon, 2018-11-26 04:47
Big day. We are announcing our brand new product - Katana. Today is the first release, which is called 18.1. While working with many enterprise customers, we saw a need for a product that would help to integrate machine learning into business applications in a more seamless and flexible way. The primary area for machine learning application in the enterprise is business automation.


Katana offers and will continue to evolve in the following areas:

1. A collection of machine learning models tailored for business automation. This is the core part of Katana. Machine learning models can run in the cloud (AWS SageMaker, Google Cloud Machine Learning, Oracle Cloud, Azure) or in a Docker container deployed on-premise. The main focus is business automation with machine learning, including automation for business rules and processes. The goal is to reduce repetitive labor time and simplify the maintenance of complex, redundant business rules.

2. An API layer built to help transform business data into the format that can be passed to a machine learning model. This part provides an API to simplify machine learning model usage in customer business applications.

3. A monitoring UI designed to display various statistics related to machine learning model usage by customer business applications. The UI that helps to transform business data into machine learning format is also implemented in this part.

Katana architecture:


One of the business use cases where we are using Katana is invoice payment risk calculation. Below is the UI that calls the Katana machine learning API to identify whether an invoice payment is at risk:


Get in touch for more information.

DOAG 2018: OVM or KVM on ODA?

Yann Neuhaus - Mon, 2018-11-26 03:51

DOAG 2018 is over; for me the most important topics were in the field of licensing. The uncertainty among users is great - let’s take virtualization on the ODA as an example:

The starting point: the customer uses Oracle Enterprise Edition, has 2 CPU licenses, uses Data Guard as disaster protection on 2 ODA X7-2M systems and wants to virtualize; he also has 2 application servers that are to be virtualized as well.

Sure, if I use the HA variant of the ODA or Standard Edition, this does not concern me: there, OVM is used as the hypervisor and it allows hard partitioning. The database system (ODA_BASE) automatically gets its own CPU pool in a virtualized deployment; additional VMs can be distributed across the remaining CPUs.

On the small and medium models, only KVM is available as a hypervisor. This has some limitations: on the one hand there is no virtualized deployment of the ODA 2S/2M systems, and on the other hand the operation of databases as KVM guests is not supported. This means that the ODA must be set up as a bare metal system, and the application servers are virtualized in KVM.

What does that mean for the customer described above? We set up the systems in bare metal mode, activate 2 cores on each system, set up the database and set up Data Guard between primary and standby. This costs the customer 2 EE CPU licenses (about $95k per price list).

Now he wants to virtualize his 2 application servers and notices that 4 cores are needed per application server. Of the 36 cores (per system) only 2 are activated, so he also activates 4 more cores (odacli update-cpucore -c 6) on both systems and installs the VMs.

But: the customer has also changed his Oracle EE licensing, namely from 1 EE CPU license to 3 per ODA, so overall he has to buy 6 CPU licenses (about $285k according to the price list)!
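
Just to make the arithmetic visible, a back-of-the-envelope check (assuming the usual Intel x86 core factor of 0.5 and a list price of roughly $47,500 per EE processor license):

# 2 ODAs x 2 active cores x 0.5 core factor = 2 processor licenses
echo "$(( 2 * 2 / 2 * 47500 )) USD"    # ~  95,000 USD
# 2 ODAs x 6 active cores x 0.5 core factor = 6 processor licenses
echo "$(( 2 * 6 / 2 * 47500 )) USD"    # ~ 285,000 USD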

Now Oracle is promoting KVM as the hypervisor of choice for virtualization in the future. However, this will not work without hard partitioning under KVM or support for databases in KVM guests.

Tammy Bednar (Oracle’s Oracle Database Appliance Product Manager) announced in her presentation “KVM or OVM? Use Cases for Solution in a Box” that solutions to this problem are expected by mid-2019:

– Oracle databases and applications should be supported as KVM guests
– Support for hard partitioning
– Windows guests under KVM
– Tooling (odacli / Web Console) should support the deployment of KVM guests
– A “privileged” VM (similar to the ODA_BASE on the HA models) for the databases should be provided
– Automated migration of OVM guests to KVM

All these measures would certainly make the “small” systems much more attractive for consolidation. They would also help to simplify the “license jungle” a bit and give customers a bit more certainty. I am curious to see what comes.

This article DOAG 2018: OVM or KVM on ODA? first appeared on Blog dbi services.

AWS re:invent 2018 warm up

Yann Neuhaus - Mon, 2018-11-26 03:07

The cloud is now part of our job, so we have to take a deeper look at the available services to understand them and take the best advantage of them. The annual AWS conference re:Invent started tonight at The Venetian in Las Vegas and will last until Friday.

AWS logo

Today was a bit special because there were no sessions yet; instead, I was able to take part in a ride to Red Rock Canyon on a Harley-Davidson motorbike.

It’s a 56-mile ride and you can enjoy beautiful landscapes, very different from the city and the lights of the casinos. We were a small group of around 13 bikes, and even if it was a bit cold it was a really nice tour. I really recommend that people in Vegas escape the city for a few hours to discover places like Red Rock or the Valley of Fire.

Harley Davidson ride to Red Rock Canyon


Then the conference opened with Midnight Madness and an attempt to beat the world record for ensemble air drumming. I don’t know yet if we achieved the goal, but I tried to help and participated in the challenge.

re:Invent Midnight Madness

The first launch of the week also happened this evening: a new service called AWS RoboMaker. You can now use the AWS cloud to develop new robotics applications and use other services like Lex or Polly to allow your robot to understand voice commands and answer them, for example.

Tomorrow the real thing begins with hands-on labs and some sessions, so stay tuned.

This article AWS re:invent 2018 warm up first appeared on Blog dbi services.

AWS: AWS Solutions Architect Associate - Practice

Dietrich Schroff - Sun, 2018-11-25 23:00
After reading the book AWS Certified Solutions Architect - Official Study Guide, I decided to go for an online practice exam at https://aws.amazon.com/training/




I had to answer 25 questions in about 30 minutes, which was quite exhausting. Only a few minutes after the exam I got the following mail:
Hmmm.
3.0 Specify Secure Applications and Architectures: 50%
An unconvincing result for this area, but with some more reading and more exercises I should get above 80%.

4.0 and 5.0 with 100%: Better than expected.

But is an overall score of 76% enough?
One day later, the following line appeared inside my AWS certification account:


;-)

Oracle VM Server x86: How to get a redundant network for the heartbeat

Dietrich Schroff - Sun, 2018-11-25 13:56
A while ago I played around with Oracle VM Manager.
I was wondering if I could set up a redundant network for the heartbeat on my VirtualBox playground. My question was: can I add an additional network and stripe the heartbeat over both networks, or do I have to configure 2 network interfaces and use bonding?

So let's start:
Open the OVM Manager and go to "Networking":
and hit the green plus to add a network:
Just hit next and provide a name and toggle the checkbox "heartbeat":

Then expand the tree to the new NIC and choose it:

Then mark the row and hit next:
For my use case I did not add any VLANs - and after all that, the heartbeat is striped over both networks:
But this is not really true:
Message: OVMRU_001079E Cannot add Ethernet device: eth1 on oraclevm, to network: hearbeat, because server: oraclevm, already has cluster network: 192.168.178.0. [Sat Nov 24 11:39:39 EST 2018]
Hmmm. This means the OVM Manager shows two check marks, but the second one does not work.
After some investigation: The network "heartbeat" was created but the port (eth1) was missing. 
So I removed the "Cluster Heartbeat" and then added the port eth1, including the checkbox "Virtual Machines".
The OVM server then showed eth1:
# ifconfig |grep ^[a-z,0-9]
108e472f6e Link encap:Ethernet  Hardware Adresse 08:00:27:43:D9:4C 
bond0     Link encap:Ethernet  Hardware Adresse 08:00:27:61:51:35 
c0a8b200  Link encap:Ethernet  Hardware Adresse 08:00:27:61:51:35 
eth0      Link encap:Ethernet  Hardware Adresse 08:00:27:61:51:35 
eth1      Link encap:Ethernet  Hardware Adresse 08:00:27:43:D9:4C 
lo        Link encap:Lokale Schleife 
But adding the "Cluster Heartbeat" once again resulted in a job that stayed in status "running" forever.

Conclusion: You should never stripe the "Cluster Heartbeat" over more than one network!

AWS: Logging? CloudTrail!

Dietrich Schroff - Sun, 2018-11-25 10:28
Today I took a look at CloudTrail:
CloudTrail provides a view into user activities by recording their API calls. On the AWS web pages you can find the following graphic:

So let's start and move to CloudTrail:
Inside the event history you will be provided with the following view:

Here you can see my efforts for the posting AWS: How to delete a static website via aws cli.
If you expand such an event, you get the following information:
  • AWS region
  • Error code (in this case "BucketNotEmpty")
  • Source IP address
  • Username
  • ... 

The events will be stored for 90 days and can be downloaded via this button (right above the event table):




$ head -3 event_history.csv
Event ID,Event time,User name,Event name,Resource type,Resource name,AWS access key,AWS region,Error code,Source IP address,Resources
5c0cd873-3cef-449c-9e6a-1809ba827ac1,"2018-11-24, 05:06:47 PM",root,TestEventPattern,,,,eu-west-1,,87.123.BBB.AAA,[]
dcd07bfa-780c-4640-9293-513c35b3db0a,"2018-11-24, 05:05:23 PM",root,ConsoleLogin,,,,us-east-1,,87.123.BBB.AAA,[]
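
The same 90-day event history can also be queried from the command line; here is a minimal sketch with the AWS CLI (the region and the attribute value are just example values):

# list the most recent events for the root user in the chosen region
aws cloudtrail lookup-events \
    --region eu-west-1 \
    --lookup-attributes AttributeKey=Username,AttributeValue=root \
    --max-results 10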

DOAG 2018: Best of Oracle Security 2018

Alexander Kornbrust - Sun, 2018-11-25 05:49

Last week I gave my yearly presentation “Best of Oracle Security 2018” at the DOAG 2018 conference in Nürnberg. In this presentation I talked about different Oracle exploits, a vulnerability in livesql.oracle.com, DNS data exfiltration in Oracle and how to audit SYSDBA connections in Oracle.


Additionally, I talked about the German DSGVO (GDPR) – „Wie wird die DSGVO umgesetzt und welche Lücken/Lügen gibt es?“ (“How is the GDPR being implemented, and what loopholes/lies are there?”).
