Feed aggregator

Merge vs Update

Tom Kyte - Mon, 2018-11-26 13:26
MERGE INTO pkt_prty_fncl_st ppst USING tmp_pkt_prty_fstate_clnb_stgg tmp on (tmp.fncl_ast_id = ppst.fncl_ast_id AND tmp.prty_id = ppst.prty_id AND tmp.pkt_pcsg_st_cd = ppst.pkt_pcsg_st_cd AN...
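For readers skimming the feed, here is a minimal sketch of the two approaches being compared (the ppst/tmp aliases come from the question; some_col stands in for whatever columns are actually being updated, and the join conditions are simplified):

-- Correlated UPDATE (only touches rows that have a match in the staging table)
update pkt_prty_fncl_st ppst
set    ppst.some_col = (
           select tmp.some_col
           from   tmp_pkt_prty_fstate_clnb_stgg tmp
           where  tmp.fncl_ast_id = ppst.fncl_ast_id
           and    tmp.prty_id     = ppst.prty_id
       )
where  exists (
           select null
           from   tmp_pkt_prty_fstate_clnb_stgg tmp
           where  tmp.fncl_ast_id = ppst.fncl_ast_id
           and    tmp.prty_id     = ppst.prty_id
       );

-- MERGE, update-only (no WHEN NOT MATCHED clause)
merge into pkt_prty_fncl_st ppst
using tmp_pkt_prty_fstate_clnb_stgg tmp
on (    tmp.fncl_ast_id = ppst.fncl_ast_id
    and tmp.prty_id     = ppst.prty_id )
when matched then update
   set ppst.some_col = tmp.some_col;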
Categories: DBA Blogs

Select first value if exists, otherwise select another value

Tom Kyte - Mon, 2018-11-26 13:26
Hello, I have a table like this:

ID  NTYPE
 1      0
 2      0
 3      1
 4      2

I need a select to get all IDs according to a list of NTYPE values (1 to N), but if any of the NTYPE list does not exist then get the rows where NTYPE = 0 ...
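Since the question is truncated, the exact rule may differ, but one literal reading (return the IDs for the requested NTYPE values only if all of them are present, otherwise fall back to NTYPE = 0) could be sketched like this, against an assumed table name T with the NTYPE list hard-coded as (1, 2) for illustration:

select id
from   t
where  ntype in (1, 2)
and    (select count(distinct ntype) from t where ntype in (1, 2)) = 2
union all
select id
from   t
where  ntype = 0
and    (select count(distinct ntype) from t where ntype in (1, 2)) < 2;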
Categories: DBA Blogs

Strange behavior when patching GI/ASM

Yann Neuhaus - Mon, 2018-11-26 12:45

I tried to apply a patch to my 18.3.0 GI/ASM two-node cluster on RHEL 7.5.
The first node worked fine, but the second node always got an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Successful

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys I found out that the host keys had to be exchanged for root as well.
So I connected as root and ran ssh from dbserver01 to dbserver02 and from dbserver02 to dbserver01.

After I exchanged the host keys the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further, and the statement that produced the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and from there an additional session to dbserver01 (itself).
I don’t know why, but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember ever having had to do a key exchange from the root user to the very same host.
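For reference, a minimal sketch of how the missing host keys could be pre-populated for root (ssh-keyscan is just one way to do it; accepting the prompt of a manual ssh login, as described above, works as well):

# Run as root on both nodes: add the ECDSA host keys of both servers,
# including the node's own key, so that the nested hop
# dbserver02 -> dbserver01 -> dbserver01 passes strict host key checking.
for h in dbserver01 dbserver02; do
    ssh-keyscan -t ecdsa ${h} >> /root/.ssh/known_hosts
done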

Did you get the same problem, or do you know a better way to do this? Write me a comment!

The article Strange behavior when patching GI/ASM appeared first on Blog dbi services.

Shrink Space

Jonathan Lewis - Mon, 2018-11-26 10:37

I have never been keen on the option to “shrink space” for a table because of the negative impact it can have on performance.

I don’t seem to have written about it in the blog but I think there’s something in one of my books pointing out that the command moves data from the “end” of the table (high extent ids) to the “start” of the table (low extent ids) by scanning the table backwards to find data that can be moved and scanning forwards to find space to put it. This strategy can have the effect of increasing the scattering of the data that you’re interested in querying if most of your queries are about “recent” data, and you have a pattern of slowly deleting ageing data. (You may end up doing a range scan through a couple of hundred table blocks for data at the start of the table that was once packed into a few blocks near the end of the table.)

In a discussion with a member of the audience at the recent DOAG conference (we were talking about execution plans for queries that included filter subqueries) I suddenly thought of another reason why (for an unlucky person) the shrink space command could be a disaster – here’s a little fragment of code and output to demonstrate the point.


rem
rem     Script:         shrink_scalar_subq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2018
rem     Purpose:
rem
rem     Versions tested
rem             12.2.0.1
rem

select
        /*+ gather_plan_statistics pre-shrink */
        count(*)
from    (
        select  /*+ no_merge */
                outer.*
        from
                emp outer
        where
                outer.sal > (
                        select  /*+ no_unnest */
                                avg(inner.sal)
                        from
                                emp inner
                        where
                                inner.dept_no = outer.dept_no
                )
        )
;

alter table emp enable row movement;
alter table emp shrink space compact;

select
        /*+ gather_plan_statistics post-shrink  */
        count(*)
from    (
        select  /*+ no_merge */
                outer.*
        from emp outer
        where outer.sal >
                (
                        select /*+ no_unnest */ avg(inner.sal)
                        from emp inner
                        where inner.dept_no = outer.dept_no
                )
        )
;

The two queries are the same and the execution plans are the same (the shrink command doesn’t change the object statistics, after all), but the execution time jumped from 0.05 seconds to 9.43 seconds – and the difference in timing wasn’t about delayed block cleanout or other exotic side effects.


  COUNT(*)
----------
      9498

Elapsed: 00:00:00.05


  COUNT(*)
----------
      9498

Elapsed: 00:00:09.43

The query is engineered to have a problem, of course, and enabling rowsource execution statistics exaggerates the anomaly – but the threat is genuine. You may have seen my posting (now 12 years old) about the effects of scalar subquery caching – this is another example of the wrong item of data appearing in the wrong place making us lose the caching benefit. The emp table I’ve used here is (nearly) the same emp table I used in the 2006 posting, but the difference between this case and the previous case is that I updated a carefully selected row to an unlucky value in 2006, but here in 2018 the side effects of a call to shrink space moved a row from the end of the table (where it was doing no harm) to the start of the table (where it had a disastrous impact).

Here are the two execution plans – before and after the shrink space – showing the rowsource execution stats. Note particularly the number of times the filter subquery ran – jumping from 7 to 3172 – the impact this has on the buffer gets, and the change in time recorded:

----------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:00.03 |    1880 |
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:00.03 |    1880 |
|   2 |   VIEW                |      |      1 |    136 |   9498 |00:00:00.03 |    1880 |
|*  3 |    FILTER             |      |      1 |        |   9498 |00:00:00.03 |    1880 |
|   4 |     TABLE ACCESS FULL | EMP  |      1 |  19001 |  19001 |00:00:00.01 |     235 |
|   5 |     SORT AGGREGATE    |      |      7 |      1 |      7 |00:00:00.02 |    1645 |
|*  6 |      TABLE ACCESS FULL| EMP  |      7 |   2714 |  19001 |00:00:00.02 |    1645 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("OUTER"."SAL">)
   6 - filter("INNER"."DEPT_NO"=:B1)


----------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:09.42 |     745K|
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:09.42 |     745K|
|   2 |   VIEW                |      |      1 |    136 |   9498 |00:00:11.71 |     745K|
|*  3 |    FILTER             |      |      1 |        |   9498 |00:00:11.70 |     745K|
|   4 |     TABLE ACCESS FULL | EMP  |      1 |  19001 |  19001 |00:00:00.01 |     235 |
|   5 |     SORT AGGREGATE    |      |   3172 |      1 |   3172 |00:00:09.40 |     745K|
|*  6 |      TABLE ACCESS FULL| EMP  |   3172 |   2714 |     10M|00:00:04.33 |     745K|
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("OUTER"."SAL">)
   6 - filter("INNER"."DEPT_NO"=:B1)


Footnote:

For completeness, here’s the code to generate the emp table. It’s sitting in a tablespace using system managed extents and automatic segment space management.


create table emp(
        dept_no         not null,
        sal,
        emp_no          not null,
        padding,
        constraint e_pk primary key(emp_no)
)
as
with generator as (
        select  null
        from    dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',60)
from
        generator       v1,
        generator       v2
where
        rownum <= 2e4 -- > comment to avoid wordpress format issue
;


insert into emp values(432, 20001, 20001, rpad('x',60));
delete /*+ full(emp) */ from emp where emp_no <= 1000;      -- > comment to avoid wordpress format issue
commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'EMP',
                method_opt       => 'for all columns size 1'
        );
end;
/



 

Our new product - Katana 18.1 (Machine Learning for Business Automation)

Andrejus Baranovski - Mon, 2018-11-26 04:47
Big day. We are announcing our brand new product - Katana. Today is the first release, which is called 18.1. While working with many enterprise customers we saw a need for a product which would help to integrate machine learning into business applications in a more seamless and flexible way. The primary area for machine learning application in the enterprise is business automation.


Katana offers, and will continue to evolve in, the following areas:

1. A collection of machine learning models tailored for business automation. This is the core part of Katana. The machine learning models can run in the Cloud (AWS SageMaker, Google Cloud Machine Learning, Oracle Cloud, Azure) or in a Docker container deployed on-premise. The main focus is business automation with machine learning, including automation of business rules and processes. The goal is to reduce repetitive labor time and to simplify the maintenance of complex, redundant business rules.

2. An API layer built to help transform business data into the format which can be passed to a machine learning model. This part provides an API to simplify machine learning model usage in customer business applications.

3. A monitoring UI designed to display various statistics related to machine learning model usage by customer business applications. The UI which helps to transform business data into the machine learning format is also implemented in this part.

Katana architecture:


One of the business use cases where we are using Katana is invoice payment risk calculation. This is the UI which calls the Katana machine learning API to identify whether an invoice payment is at risk:


Get in touch for more information.

DOAG 2018: OVM or KVM on ODA?

Yann Neuhaus - Mon, 2018-11-26 03:51

The DOAG 2018 is over; for me the most important topics were in the field of licensing. The uncertainty among users is great, so let’s take virtualization on the ODA as an example:

The starting point: the customer uses Oracle Enterprise Edition, has 2 CPU licenses, uses Data Guard as disaster protection on 2 ODA X7-2M systems and wants to virtualize; he also has 2 application servers that are to be virtualized as well.

Sure, if I use the HA variant of the ODA or Standard Edition, this does not concern me: there, OVM is used as the hypervisor, and it allows hard partitioning. The database system (ODA_BASE) automatically gets its own CPU pool in a Virtualized Deployment; additional VMs can be distributed over the remaining CPUs.

On the small and medium models only KVM is available as a hypervisor. This has some limitations: on the one hand there is no virtualized deployment of the ODA 2S/2M systems, on the other hand the operation of databases as KVM guests is not supported. This means that the ODA must be set up as a bare metal system and the application servers are virtualized in KVM.

What does that mean for the customer described above? We set up the system in bare metal mode, activate 2 cores on each system, set up the database and set up Data Guard between primary and standby. This costs the customer 2 EE CPU licenses (about $95k per price list).

Now he wants to virtualize his 2 application servers and notices that 4 cores are needed per application server. Of the 36 cores (per system) only 2 are activated, so he activates 4 more cores (odacli update-cpucore -c 6) on both systems and installs the VMs.

But: by doing so the customer has also changed his Oracle EE license requirement, namely from 1 EE CPU license to 3 per ODA, so overall he has to buy 6 CPU licenses (about $285k according to the price list)!
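To spell out the arithmetic behind these figures (assuming the usual core factor of 0.5 for Intel x86 and a list price of roughly $47,500 per EE processor license - both are my assumptions, not figures from the talk):

Before:  2 activated cores per node * 0.5 core factor = 1 processor license per node
         2 nodes * 1 license  * ~$47,500              = ~$95,000

After:   6 activated cores per node * 0.5 core factor = 3 processor licenses per node
         2 nodes * 3 licenses * ~$47,500              = ~$285,000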

Oracle now promotes KVM as the hypervisor of choice for virtualization in the future. However, this will not work without hard partitioning under KVM and support for running databases in KVM guests.

Tammy Bednar (Oracle’s Oracle Database Appliance Product Manager) announced in her presentation “KVM or OVM? Use Cases for Solution in a Box” that solutions to this problem are expected by mid-2019:

– Oracle databases and applications should be supported as KVM guests
– Support for hard partitioning
– Windows guests under KVM
– Tooling (odacli / Web Console) should support the deployment of KVM guests
– A “privileged” VM (similar to the ODA_BASE on the HA models) for the databases should be provided
– Automated migration of OVM guests to KVM

All these measures would certainly make the “small” systems much more attractive for consolidation. They would also help to simplify the “license jungle” a bit and give customers a bit more security. I am curious what will come.

The article DOAG 2018: OVM or KVM on ODA? appeared first on Blog dbi services.

AWS re:invent 2018 warm up

Yann Neuhaus - Mon, 2018-11-26 03:07

The Cloud is now part of our job, so we have to take a deeper look at the available services to understand them and take best advantage of them. The annual AWS conference re:Invent started tonight in The Venetian at Las Vegas and will last until Friday.

AWS logo

Today was a bit special because there were no sessions yet, but instead I was able to participate in a ride to Red Rock Canyon on a Harley Davidson motorbike.

It’s a 56-mile ride and you can enjoy beautiful landscapes, very different from the city and the lights of the casinos. We were a small group of around 13 bikes, and even if it was a bit cold it was a really nice tour. I really recommend people in Vegas to escape the city for a few hours to discover places like Red Rock or the Valley of Fire.

Harley Davidson ride to Red Rock Canyon

 

Then the conference opened with Midnight Madness and an attempt to beat the world record for ensemble air drumming. I don’t know yet if we achieved the goal, but I tried to help and participated in the challenge.

re:Invent Midnight Madness

The first launch of the week was also done this evening: a new service called AWS RoboMaker. You can now use the AWS cloud to develop new robotics applications and use other services like Lex or Polly to allow your robot to understand voice orders and answer them, for example.

Tomorrow the real thing begins with hands-on labs and some sessions, stay tuned.

The article AWS re:invent 2018 warm up appeared first on Blog dbi services.

AWS: AWS Solutions Architect Associate - Practice

Dietrich Schroff - Sun, 2018-11-25 23:00
After reading the book AWS Certified Solutions Architect - Official Study Guide, I decided to go for an online exam at https://aws.amazon.com/training/




I had to answer 25 questions in about 30 minutes, which was quite exhausting. Only a few minutes after the exam I got the following mail:
Hmmm.
3.0 Specify Secure Applications and Architectures: 50%
An unconvincing result for this area, but with some more reading and more exercises I should get above 80%.

4.0 and 5.0 with 100%: Better than expected.

But is an overall score of 76% enough?
One day later, the following line appeared in my AWS certification account:


;-)

Oracle VM Server x86: How to get a redundant network for the heartbeat

Dietrich Schroff - Sun, 2018-11-25 13:56
A while ago I played around with Oracle VM Manager.
I was wondering if I could set up a redundant network for the heartbeat on my VirtualBox playground. My question was: can I add an additional network and stripe the heartbeat over both networks, or do I have to configure 2 network interfaces and use bonding?

So let's start:
Open the OVM Manager and go to "Networking":
and hit the green plus to add a network:
Just hit next and provide a name and toggle the checkbox "heartbeat":

Then expand the tree to the new NIC and choose it:

Then mark the row and hit next:
For my use case I did not add any VLANs - and after all that, the heartbeat is striped over both networks:
But this is not really true:
Message: OVMRU_001079E Cannot add Ethernet device: eth1 on oraclevm, to network: hearbeat, because server: oraclevm, already has cluster network: 192.168.178.0. [Sat Nov 24 11:39:39 EST 2018]
Hmmm. This means the OVM Manager shows two check marks, but the second one does not work.
After some investigation: the network "heartbeat" was created, but the port (eth1) was missing.
So I removed the "Cluster Heartbeat" and then added the port eth1, including the checkbox "Virtual Machines".
The OVM server then showed eth1:
# ifconfig |grep ^[a-z,0-9]
108e472f6e Link encap:Ethernet  Hardware Adresse 08:00:27:43:D9:4C 
bond0     Link encap:Ethernet  Hardware Adresse 08:00:27:61:51:35 
c0a8b200  Link encap:Ethernet  Hardware Adresse 08:00:27:61:51:35 
eth0      Link encap:Ethernet  Hardware Adresse 08:00:27:61:51:35 
eth1      Link encap:Ethernet  Hardware Adresse 08:00:27:43:D9:4C 
lo        Link encap:Lokale Schleife 
But adding "Cluster Heartbeat" once again resulted in a job which stayed in status "running" forever.

Conclusion: You should never stripe the "Cluster Heartbeat" over more than one network!
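If redundancy for the heartbeat network is needed, the classic alternative mentioned at the start is NIC bonding below a single cluster network. A generic Linux sketch (assumed interface names and addresses; OVM Server builds its own bridges on top, so treat this purely as an illustration of the idea, not an OVM-specific recipe):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.178.10
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none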

AWS: Logging? CloudTrail!

Dietrich Schroff - Sun, 2018-11-25 10:28
Today I took a look at CloudTrail:
CloudTrail provides a view into user activities by recording their API calls. On the AWS webpages you can find the following graphic:

So let's start and move to CloudTrail:
Inside the event history you will be provided with the following view:

Here you can see my efforts for the posting AWS: How to delete a static website via aws cli.
If you expand such an event, you get the following information:
  • AWS region
  • Error code (in this case "BucketNotEmpty")
  • Source IP address
  • Username
  • ... 

The events will be stored for 90 days and can be downloaded via this button (right above the event table):




$ head -3 event_history.csv
Event ID,Event time,User name,Event name,Resource type,Resource name,AWS access key,AWS region,Error code,Source IP address,Resources
5c0cd873-3cef-449c-9e6a-1809ba827ac1,"2018-11-24, 05:06:47 PM",root,TestEventPattern,,,,eu-west-1,,87.123.BBB.AAA,[]
dcd07bfa-780c-4640-9293-513c35b3db0a,"2018-11-24, 05:05:23 PM",root,ConsoleLogin,,,,us-east-1,,87.123.BBB.AAA,[]
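The same event history can also be queried from the command line; a small sketch using the AWS CLI (region and filter values are just examples):

# list the five most recent events recorded by CloudTrail
aws cloudtrail lookup-events --max-results 5 --region eu-west-1

# filter for a specific API call, e.g. the failed bucket deletion shown above
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteBucket \
    --max-results 5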

DOAG 2018: Best of Oracle Security 2018

Alexander Kornbrust - Sun, 2018-11-25 05:49

Last week I gave my yearly presentation “Best of Oracle Security 2018” at the DOAG 2018 conference in Nürnberg. In this presentation I talked about various Oracle exploits, a vulnerability in livesql.oracle.com, DNS data exfiltration in Oracle, and how to audit SYSDBA connections in Oracle.

 

Additionally, I talked about the German DSGVO (GDPR) – „Wie wird die DSGVO umgesetzt und welche Lücken/Lügen gibt es?“ (“How is the GDPR being implemented, and what gaps/lies are there?”).


Oracle ADF + Jasper Visualize.js = Awesome

Andrejus Baranovski - Sun, 2018-11-25 02:36
This week I was working on a task to integrate Jasper Visualize.js into an Oracle ADF application JSF page fragment. I must say the integration was successful, and the Jasper report renders very well in the Oracle ADF screen with the help of Visualize.js. The great thing about Visualize.js is that it renders the report in the ADF page through client-side HTML/JS; there is no iFrame. The report HTML structure is included in the HTML generated by ADF, which allows CSS to be used to control the report size and make it responsive.

To prove the integration, I was using an ADF application with multiple regions - ADF Multi Task Flow Binding and Tab Order. Each region is loaded in an ADF Faces tab:


One of the tabs displays a region with the Jasper report, rendered with Visualize.js:


Check the client-side generated code. You should see HTML from Visualize.js inside the ADF-generated HTML structure:


It is straightforward to render a Jasper report with Visualize.js in Oracle ADF. Add a JS resource reference to the Visualize.js library, define the DIV where the report is supposed to be rendered, and add a Visualize.js function call to render the report from a certain path, etc.:
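For illustration, a minimal Visualize.js snippet of the kind that ends up in the page (server URL, credentials, container id and report path are placeholders, not the ones from my application):

<script src="http://yourjasperserver:8080/jasperserver-pro/client/visualize.js"></script>
<div id="reportContainer"></div>
<script>
  // authenticate against JasperReports Server, then render the report
  // into the div defined above
  visualize({
    auth: { name: "joeuser", password: "joeuser" }
  }, function (v) {
    v("#reportContainer").report({
      resource: "/public/Samples/Reports/RevenueDetailReport",
      error: function (e) { alert(e.message); }
    });
  });
</script>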


Sample code is available on my GitHub repo.

Flashback to the DOAG conference 2018

Yann Neuhaus - Sat, 2018-11-24 14:40

Each year, since the company creation in 2010, dbi services attends the DOAG conference in Nürnberg. Since 2013 we even have a booth.

The primary goal of participating in the DOAG Conference is to get an overview of the main trends in the Oracle business. Furthermore, this conference and our booth allow us to welcome our Swiss and German customers and thank them for their trust. They’re always pleased to receive some nice Swiss chocolate produced in Delémont (Switzerland), the city of our headquarters.

But those are not the only reasons why we attend this event. The DOAG conference is also a way to promote our expertise through our speakers and to thank our consultants for their work all over the year. We consider the conference a way to train people and improve their skills.

Finally, some nice social evenings take place: first the Swiss Oracle User Group (SOUG) “Schweizer Abend” on Tuesday evening, then the “DOAG party” on Wednesday evening. dbi services being active in the Swiss Oracle User Group, we always keep a deep link to the Oracle community.

As Chief Sales Officer I tried to get an overview of the main technical “Oracle trends” through the success of our sessions (9 in total) over the conference, “success” being measured in terms of the number of participants in those sessions.

At first glance I observed a kind of “stagnation” of the interest in Cloud topics. I can provide several pieces of evidence and explanations for that. First of all, the keynote during the first day, presenting a study of German customers concerning cloud adoption, didn’t reveal any useful information in my opinion. Cloud adoption is increasing, but there are still some limitations in the deployment of Cloud solutions because of security issues and in particular the CLOUD Act.

Another possible reason for the “small” interest in Cloud topics during the conference, in my opinion, is that the Cloud has become a kind of “commodity”. Furthermore, we all have to admit that Oracle definitely does not have a leadership position in this business. Amazon, Azure and Google are clearly the leaders, and Oracle remains a “small” challenger.

Our session from Thomas Rein did not have that many attendees, even though we presented a concrete use case about Oracle Cloud usage and adoption. The DOAG conference is a user group conference; techies mostly attend it, and techies have to deal with concrete issues - currently the Oracle Cloud is not one of them.

So what were the “main topics”, according to what I could observe?

Open Source had a huge success for us: both the MySQL track and the two PostgreSQL tracks were very, very successful, thanks to Elisa Usai and Daniel Westermann.

Some general topics like an “introduction to Blockchain” also had a huge success, thanks to Alain Lacour for this successful session.

Finally the “classics”, like DB tuning on the old-fashioned “on-prem” architectures, also had a huge success, thanks to our technology leader Clemens Bleile and to Jérôme Witt, who explained all about the I/O internals (which are of course deeply linked with performance issues).

Thanks to our other speakers: Pascal Brand (Implement SAML 2.0 SSO in WLS and IDM Federation Services) and our CEO David Hueber (ODA HA: What about VMs and backup?), who presented some more “focused” topics.

I use this blog post to also thank the Scope Alliance and in particular Esentri for the very nice party on Wednesday evening; besides hard work, a hard party is also necessary :-)

Below, Daniel Westermann with our customer “die Mobiliar” on the stage, full room:


The article Flashback to the DOAG conference 2018 appeared first on Blog dbi services.

Creating Database Deployment (DBCS | DBaaS): Oracle Cloud Certification [1Z0-160]

Online Apps DBA - Sat, 2018-11-24 08:14

Move ahead in your journey towards Oracle Cloud Certification by visiting: https://k21academy.com/1z016013 and learn about: ✔ Options For Deploying Database on Cloud ✔ Difference between OCI & OCI-Classic ✔ Compute Shape & Storage ✔ Scale-Up & Back-Ups […]

The post Creating Database Deployment (DBCS | DBaaS): Oracle Cloud Certification [1Z0-160] appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Bait and Switch

Jonathan Lewis - Sat, 2018-11-24 03:56

Just what you need after a long hike in the Lake District:

AWS: What services are free of charge? How to control your costs...(part 2)

Dietrich Schroff - Sat, 2018-11-24 02:47
In November I did several tests with AWS:
A big question is: Was this really free of charge?
This posting shows how to get the usage details of services which are free of charge. 

Here are some details about EKS, ECS and VPC. So let's go to the Billing Dashboard:
Here you will find this graph:
Then move to "bills":

Some of the services are not billed by usage but simply for existing:
  • A VPN connection costs $0.05 per hour.
  • A Kubernetes (EKS) cluster costs $0.20 per hour.
So if you want to explore AWS, you have to be fast - otherwise you have to pay for being slow ;-)
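Extrapolated to a full month, using the hourly rates quoted above:

VPN connection:  $0.05/hour * 24 hours * 30 days ≈  $36 per month
EKS cluster:     $0.20/hour * 24 hours * 30 days ≈ $144 per month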

AWS: What services are free of charge? How to control your costs...

Dietrich Schroff - Fri, 2018-11-23 14:39
In November I did several tests with AWS:
A big question is: Was this really free of charge?

Let's go to the Billing Dashboard:
On this page you will get this listing:
If you click on "View all", you get a detailed statistic of your usage:
But as you can see, ECS, VPC and EKS are missing from this list. So I will show the costs for these services in this posting.


My DOAG Debut

Yann Neuhaus - Fri, 2018-11-23 08:50

Unbelievable! After more than 10 years working in the Oracle Database environment, this year was my first participation in the DOAG Conference + Exhibition.

After a relaxed trip to Nürnberg with all the power our small car could provide on the German Autobahn, we arrived at the Messezentrum.
With the combined power of our dbi services team, the booth was ready in no time and we could switch to the more relaxed part of the day, ending up in our hotel’s bar with other DOAG participants.

The next few days were a firework of valuable sessions, stimulating discussions and some after-hours parties which made me think about my life decisions and led me to the question: why did it take me so long to participate in the DOAG Conference + Exhibition?

It would make this post unreadably long and boring if I summed up all the sessions I attended.
So I will just mention a few highlights with the links to the presentations:


Boxing Gloves Vectors by Creativology.pk

And of course, what must be mentioned is The Battle: Oracle vs. Postgres: Jan Karremans vs. Daniel Westermann

The red boxing glove (for Oracle) represents Daniel Westermann, an Oracle expert for many, many years who is now the Open Infrastructure Technology Leader @ dbi services, while Jan Karremans, Senior Sales Engineer at EnterpriseDB, put on the blue glove (for Postgres). The room was fully packed with over 200 people who have more sympathy for Oracle.


The Battle: Oracle vs. Postgres

Knowing how much Daniel loves the open source database, it was inspiring to see how eloquently he defended the Oracle system and brought Jan into trouble multiple times.
It was a good and brave fight between the opponents, in which Daniel had the better arguments and won on points.
Next time I would like to see Daniel on the other side, defending Postgres, because I am sure he could fight down almost every opponent.

In the end, this DOAG was a wonderful experience and I am sure it won’t take another 10 years until I come back.

PS: I could write about the after party, but as you know, what happens at the after party stays at the after party - except the headache; this little b… stays a little bit longer.

PPS: On the last day I got a nice little present from virtual7 for winning the F1 grand prix challenge. I know exactly at which dbi event we will open this bottle, stay tuned…

The article My DOAG Debut appeared first on Blog dbi services.

Oracle Database Cloud Service (DBCS) Overview & Offerings : Cloud Certification 1Z0-160

Online Apps DBA - Fri, 2018-11-23 07:41

To strengthen your preparation for Oracle Cloud Certification for DBAs (1Z0-160), visit: https://k21academy.com/1z016012 & learn about: ✔ Database Cloud Service Overview & Offerings ✔ Various Service Models in Cloud ✔ What happens when you configure Oracle Database in Cloud & much more… […]

The post Oracle Database Cloud Service (DBCS) Overview & Offerings : Cloud Certification 1Z0-160 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
