Feed aggregator

GNW01: In-Memory Processing for Databases

Tanel Poder - Mon, 2016-03-28 00:39

Hi, it took a bit longer than I had planned, but here’s the first Gluent New World webinar recording!

You can also subscribe to our new Vimeo channel here – I will announce the next event with another great speaker soon ;-)

A few comments:

  • Slides are here
  • I’ll figure out a good way to deal with offline follow-up Q&A later on, after we’ve done a few of these events

If you like this stuff, please share it too – let’s make this series totally awesome!


NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Create a PrimaryKey on existing table without unique values

Learn DB Concepts with me... - Mon, 2016-03-28 00:25
Let's assume there is an existing table ABC in the database and you want to add an additional column populated with unique primary key values.


SQL to create table ABC:

  CREATE TABLE "ABC"
   (    "USERNAME" VARCHAR2(30 BYTE) NOT NULL ENABLE,
    "USER_ID" NUMBER NOT NULL ENABLE,
    "CREATED" DATE NOT NULL ENABLE )
   TABLESPACE "USERS" ;

Now we can add an additional column, ID, which will be populated with unique values for the primary key.

alter table abc add(ID NUMBER);

We will now create a sequence and use its values to populate the new ID column of table ABC:

CREATE SEQUENCE SEQ_ID
START WITH 1
INCREMENT BY 1
MAXVALUE 999999
MINVALUE 1
NOCYCLE;

Now populate the column with unique values using the SQL below:
UPDATE abc SET ID = SEQ_ID.NEXTVAL;


Now you can add a unique constraint or a primary key constraint to the table, so that it won't accept any duplicate values.

alter table abc add constraint abc_pk primary key (ID);
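Before adding the constraint, it can be worth confirming the new column really has no duplicates; a quick sanity check (a sketch, reusing the table and column names from this post):

```sql
-- any rows returned here indicate duplicate ID values,
-- which would make the primary key creation fail
SELECT id, COUNT(*)
FROM   abc
GROUP  BY id
HAVING COUNT(*) > 1;
```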
Categories: DBA Blogs

April 20: Eastern Bank―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Sun, 2016-03-27 19:19

Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, April 20, 2016. Ed Saras, Sr. Vice President Total Rewards & HR Operations from Eastern Bank, will talk about Eastern's journey to the cloud.

The company had disparate HR system solutions that were impacting its ability to fully understand its talent base, which made it difficult to effectively develop and grow the workforce. Reporting and analysis capability was limited, HR processes were cumbersome, and employees and managers had limited visibility into online information. Eastern needed to make improvements to eliminate paper processes, replace its ADP payroll application as the system of record, and consolidate other bolt-on benefits, compensation, performance, and recruiting modules.

Register now to attend the live Forum on Wednesday, April 20, 2016, at 9:00 a.m. PT and learn more about Eastern Bank’s experience with Oracle HCM Cloud.

Partner Webcast – Oracle Java Cloud Service: Getting Started with

Oracle Java Cloud Service is a complete platform and infrastructure cloud solution for building, deploying, and managing Java EE applications. Your business can maximize productivity and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Now an OCP 12c

Hemant K Chitale - Sun, 2016-03-27 10:59
I have upgraded my Oracle Database Certified Professional status from 11g to 12c, having passed the 1Z0-060 Exam last Monday.

This is the list of material I used :

1.  Documentation :  Oracle 12c Database Documentation.  Not just the New Features Guide but a large number of pages spread throughout the documentation.  I went through *selected* chapters / pages of the Administrator's Guide, Reference Guide, PLSQL Package Guide, Developers Guide and Backup and Recovery Guide

2.  Books :
a.  Oracle Database 12c New Features  by Robert G. Freeman  (Oracle Press)
b.  OCP Upgrade to Oracle Database 12c Exam Guide  by Sam R. Alapati  (Oracle Press)
c.  Study Guide for 1Z0-060 Upgrade to Oracle Database 12c  by Matthew Morris (Self-Published as Oracle Certification Prep)

3.  FlashCards
a.  FlashCards with Test Software for 1Z0-060 at https://www.selftestsoftware.com
b.  FlashCards OCPFlash1Z0-060 on Google PlayStore  by Matthew Morris

4.  Practice Tests
a.  Practice Tests and Test Software for 1Z0-060 at https://www.selftestsoftware.com
b.  Oracle Certification Prep Practice Test for 1Z0-060  at http://www.oraclecertificationprep.com

5.  Links to other Resources :  http://www.oraclecertificationprep.com and  www.oracle-base.com


Note: this list of materials does not mean I went without practice environments; the books alone aren't sufficient. I created three practice environments:
a.  Fresh install of Oracle Linux, RDBMS 12.1.0.1, and creation of a non-CDB database on a filesystem
b.  Downloaded the Oracle VirtualBox pre-built VM with 12.1.0.2 and a MultiTenant database
c.  Fresh install of Oracle Linux, creation of disks for ASM, install of Grid Infrastructure and RDBMS 12.1.0.2, and creation of a non-CDB database on ASM
.
.
.

Categories: DBA Blogs

#Oracle University Expert Summit Berlin 2016

The Oracle Instructor - Sun, 2016-03-27 06:20

Join us in Berlin, 18th – 20th April. The event will take place at the Adlon Hotel with Jonathan Lewis, Pete Finnigan, Christian Antognini, Javier de la Torre Medina and myself as speakers. We have over 70 enrollments already, so take action to secure your seat in time!

Expert Summit Berlin 2016

My topic will be Rolling Upgrade from 11g to 12c, with a special focus on doing it with Transient Logical Standby. In a live demonstration I will start with a Primary and a Physical Standby Database both running 11.2.0.4 in Maximum Availability protection mode using the Data Guard Broker. This is likely one of the most common setups today. We will then see how to upgrade this configuration to 12.1.0.2 with minimum downtime.

When I did this seminar before during another Expert Summit in Dubai, I was still using 11.2.0.1 as the initial release, then upgrading it to 11.2.0.3. It took me some time and effort to update my demo environment and the lessons to cover more recent and meanwhile more relevant versions. Hope to see you there :-)


Categories: DBA Blogs

Compression -- 5 : OLTP Compression

Hemant K Chitale - Sat, 2016-03-26 10:03
Taking the test case from my first Blog Post on compression, is there any way to support OLTP workloads?
As demonstrated in the first Blog Post and here, once the table undergoes DML, its size changes.

PDB1@ORCL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name in ('SOURCE_DATA','COMPRESSED_1')
4 /

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
COMPRESSED_1 TABLE 12
SOURCE_DATA TABLE 49

PDB1@ORCL>
PDB1@ORCL> update compressed_1
2 set owner=owner;

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_1';

BYTES/1048576
-------------
22

PDB1@ORCL>
PDB1@ORCL> update compressed_1
2 set owner=substr(owner||owner,1,30);

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_1';

BYTES/1048576
-------------
58

PDB1@ORCL>

So, I set up another table that supports OLTP compression.  This requires the Advanced Compression licence.  Make sure that you have purchased the licence!

PDB1@ORCL> create table compressed_2 tablespace hemant COMPRESS FOR OLTP
2 as select * from source_data where 1=2;

Table created.

PDB1@ORCL> insert /*+ APPEND */ into compressed_2
2 select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
13

PDB1@ORCL> update compressed_2
2 set owner=owner;

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
13

PDB1@ORCL> update compressed_2
2 set owner=substr(owner||owner,1,30);

364496 rows updated.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
29

PDB1@ORCL>
PDB1@ORCL> select table_name, pct_free, compression, compress_for
2 from user_tables
3 where table_name like 'COMPRESS%'
4 order by 1;

TABLE_NAME                       PCT_FREE COMPRESS COMPRESS_FOR
------------------------------ ---------- -------- ------------------------------
COMPRESSED_1                            0 ENABLED  BASIC
COMPRESSED_2                           10 ENABLED  ADVANCED


PDB1@ORCL>


Note that the initial size of COMPRESSED_2 is slightly larger because it starts with PCT_FREE=10.
The UPDATE does seem to take longer to run.  COMPRESS FOR OLTP preserves PCT_FREE at 10, but it handles UPDATEs better than BASIC Compression does. However, if you have no subsequent UPDATEs to the data, BASIC Compression (which does not need an additional licence when running Enterprise Edition) will suffice.

I urge you to *TEST* COMPRESS FOR OLTP with the different types of UPDATE operations that you actually have in your live database before you choose to implement it.

In 12c, COMPRESS FOR OLTP is called ROW STORE COMPRESS ADVANCED.  I presented the 11g style COMPRESS FOR OLTP syntax so that you could use it in 11g.
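As a sketch of the 12c syntax mentioned above (reusing the table and tablespace names from this test case), the equivalent of COMPRESSED_2's definition would be:

```sql
-- 12c spelling of the 11g COMPRESS FOR OLTP clause
CREATE TABLE compressed_3 TABLESPACE hemant
  ROW STORE COMPRESS ADVANCED
AS SELECT * FROM source_data WHERE 1=2;
```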
.
.
.


Categories: DBA Blogs

WebLogic Stuff

Tim Hall - Sat, 2016-03-26 04:11

I’ve written a bunch of stuff about WebLogic over the last few years, but it’s kind-of scattered throughout the site. When I was gearing up for my session at the Middleware SIG recently, I figured it was about time I collected it all together, so I added a new WebLogic page to the site. It’s on the menu and the quick links.

Much the same as the SQL and PL/SQL pages, it really is just a bunch of links to other stuff. I tend to put my articles in version-specific locations, because it suits me, but I realise it’s not ideal for everyone. I think this gives me the best of both worlds.

I’m always a little nervous about bringing attention to articles on certain subjects on my website. If you’ve seen me present, you’ll know I’m quick to point out I’m not “the WebLogic guy”. It’s something I’m involved in, but I would never dream of making out I’m the man. If WebLogic is your thing, there are better people to follow than me!

Having said that, the website has always been me writing about the stuff I’m doing, and that’s the way it’s going to stay. If it helps you, that’s great. If not, I’m sorry, but there are lots of other websites to read.

Have Fun out There

Oracle AppsLab - Fri, 2016-03-25 21:40

Yesterday, I talked, in part, about how we can, if we choose, work all the time, from the moment we open our eyes, until we close them for sleep.

So today’s as good a Friday as any to remind you to balance all that work with some fun.

We here at the AppsLab are big proponents of enjoying what we do, and today reminds me of that. Here’s a sample of what Anthony (@anthonyslai), Raymond (@yuhuaxie) and Tony did today.

Here are Tony and Raymond showing the IoT Nerf gun to a group of dTech (@dTechHS) students attending an Oracle Education Foundation (@ORCLCitizenship) workshop.

And here’s Anthony showing our Gadget Lab demos to some kids on Spring Break.

We make a conscious effort to take time out and show our fun stuff to kids. Why? Because fun is important.

Back in 2012, I pondered what I could show at Kscope12 that would spice up my presentation. I wanted to add something fun to my session, and after chatting with Noel, we settled on the Rock ’em Sock ’em robots controlled by text/phone as a fun way to keep people’s attention.

During the conference, I stumbled upon something. Turns out when you show people something fun, their creative juices flow, and you get lots of cool ideas. Like this:

I’m not an expert in every functional area of business. I may know some Financials from my years as an E-Business Suite product manager, but I don’t know much about sales or HR or supply chain.

So, when we have something we think will be important for users in the not-so-distant future, i.e. an emerging technology, we often build a fun demo to help those domain-specific uses rise to the surface.

People play with the demo, they have some fun with it, and we talk about ways that particular technology could apply to their everyday work.

This works, believe it or not, and we’ve been repeating that formula with great results for several years. It’s part of our strategy.

For example, robot arms and race cars to investigate gesture as an input device, Internet of Things demos for when things happen and for connecting dumb things to the Internet, and using your mind to drive a robotic ball.

It’s become an annual team activity to come up with the year’s fun demo, and everyone loves the fun demos, building them, showing them, playing with them.

So have fun out there. We will.

Now, if only I could talk Noel into resurrecting those Rock ’em Sock ’em robots.

Interview with PeopleSoft Administrator Podcast

David Kurtz - Fri, 2016-03-25 13:59
I recently recorded an interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast. It has been spread over three episodes. There is lots of other good stuff on the website and other episodes that are well worth listening to.
(25 March 2016) #21 - Temporary Tables
(8 April 2016) #23 - The Application Server
(15 April 2016) #24 - Application Server Tuning

You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

ADF BC Version Number and Change Indicator to Track Changed Rows

Andrejus Baranovski - Fri, 2016-03-25 11:12
One of the common use cases in enterprise applications is tracking concurrent user changes. Two types of change are possible - two real users changing data in the same row, or a single user changing data while the same row is also updated by a PL/SQL procedure/function (all happening in the same user session). In the first case, we would like to inform the user that the row data was changed (two different users changing data). In the second case, there is no need to inform the user (the data wasn't changed by another real user; it was changed by a PL/SQL function/procedure invoked in the same session).

ADF BC by default tracks all attributes and checks if their values were changed. This means that if any attribute value is changed directly in the DB, the EO cache will be out of sync and a changed-row error will be reported. We can minimize the number of attributes to be checked to a single attribute - the Version Number. There must be an additional number-type column created in the DB table; we are going to use it for the Version Number in the EO. The EO attribute should be marked as Change Indicator and set to "version number" in the Track Change History property. This means a row will be considered changed by another user only if the Version Number attribute value has changed. Changes happening directly from PL/SQL will not increment the Version Number (it is incremented only for changes submitted from ADF BC):


There should be new column created in the DB for Version Number attribute:


Let's see how it works. I will use two different browser sessions to simulate two users. Change the value of First Name and take a look at the Version Number. The value is 4 before Save:


Data is saved successfully to DB. New value 5 is assigned automatically for Version Number:


Switch to the other session now. The Version Number still shows the previous value 4 (this session was opened before the recent commit from the first session). Both sessions are working with the same row. Change the value of Salary:


Try to save this change. You will get the standard error message informing the user about changes in the same row. This happens because the Version Number value doesn't match the most recent Version Number for the same record:


At the same time, the Version Number is synchronized automatically (updated to 5) and the user can try to commit the changes again. This time the data is saved and the Version Number is increased to 6:


We should now check how it works when data is changed in the DB directly, without increasing the Version Number (the case of an update happening from PL/SQL). I'm changing the Employee_Id = 100 record directly in the DB (changing the Salary value). The Version Number is not changed:


Go back to the Web session, change the First Name value, and press Save:


Although the data was changed in the DB, the save was successful, because the Version Number was still current. First Name was successfully changed and the Version Number was increased to 7:


Download the sample application (make sure to manually create a VERSION_NO NUMBER(10,0) column in the EMPLOYEES DB table) - ADFChangedRowApp.zip.
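The manual step above can be sketched as plain DDL (table and column names from the post; the exact datatype spelling is my reading of the post's VERSION_NO(10, 0) note):

```sql
-- add the column used as Version Number / Change Indicator in the EO
ALTER TABLE employees ADD (version_no NUMBER(10,0));
```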

Links for 2016-03-24 [del.icio.us]

Categories: DBA Blogs

ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET

Learn DB Concepts with me... - Fri, 2016-03-25 00:08
In this scenario I am trying to increase the value of the parameter memory_max_target. My initial memory_max_target is 804M, and I want to increase it to 900M.

SQL> show parameter sga

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga     boolean FALSE
pre_page_sga     boolean FALSE
sga_max_size     big integer 804M
sga_target     big integer 0

SQL> show parameter max_target

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
memory_max_target     big integer 804M

SQL> show parameter memory

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address     integer 0
memory_max_target     big integer 804M
memory_target     big integer 804M
shared_memory_address     integer 0

SQL> alter system set memory_max_target=900 scope=spfile;

System altered.

SQL> show parameter memory;

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address     integer 0
memory_max_target     big integer 804M
memory_target     big integer 804M
shared_memory_address     integer 0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET
SQL> startup mount;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET
SQL> startup nomount;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET

Since we can't log in to the DB to check the value that was set, let's create a pfile and check the actual value.

SQL> create pfile from spfile;

File created.

[oracle@Linux01 ~]$ cd $ORACLE_HOME/dbs

[oracle@Linux01 dbs]$ ls -ll

[oracle@Linux01 dbs]$ vi initDB11G.ora


Haha ... here is the problem in my case.

*********************************************************************************
In my case the problem was that I didn't specify MEMORY_MAX_TARGET in MB.
Without a unit, the value 900 was interpreted as 900 bytes.
Changing the value to 900M did the trick.
*********************************************************************************
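In other words, the fix is simply to include an explicit size unit when setting the parameter:

```sql
-- with the unit specified, 900M means 900 megabytes rather than 900 bytes
ALTER SYSTEM SET memory_max_target=900M SCOPE=spfile;
```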




[oracle@Linux01 dbs]$ sqlplus  /"AS sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Thu Mar 24 23:46:18 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

The startup below uses the corrected pfile. The second restart afterwards isn't strictly needed, but since I want to use the spfile, I did it.

SQL> startup pfile='$ORACLE_HOME/dbs/initDB11G.ora';
ORACLE instance started.

Total System Global Area  939495424 bytes
Fixed Size    2218952 bytes
Variable Size  675284024 bytes
Database Buffers  255852544 bytes
Redo Buffers    6139904 bytes
Database mounted.
Database opened.
SQL> create spfile from pfile;

File created.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area  939495424 bytes
Fixed Size    2218952 bytes
Variable Size  675284024 bytes
Database Buffers  255852544 bytes
Redo Buffers    6139904 bytes
Database mounted.
Database opened.
SQL>


SQL> show parameter memory;

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address     integer 0
memory_max_target     big integer 900M
memory_target     big integer 800M
shared_memory_address     integer 0
Categories: DBA Blogs

Tablespace Free w Fragmentation Info

Michael Dinh - Thu, 2016-03-24 14:43

I have been conversing with Liron Amitzi on twitter about his tablespace free SQL script.

He accommodated my request to add fragmentation details so I could verify against what I have.

Looks like I did it right.

You can find Liron's script at this blog – Tablespaces Free Space and Stuff

oracle@arrow:tiger:/media/sf_working/sql
$ sysdba @ts_free_space.sql

SQL*Plus: Release 11.2.0.4.0 Production on Thu Mar 24 12:35:23 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


TABLESPACE_NAME                   CURR_SIZE     MAX_SIZE    FREE_SIZE    USED_SIZE     PCT_FREE PCT_FREE_TOTAL  FREE_CHUNKS LARGEST_CHUNK
------------------------------ ------------ ------------ ------------ ------------ ------------ -------------- ------------ -------------
USERS                                  1024         1024      1021.38         2.62        99.74          99.74            2       1021.13
UNDOTBS1                                 80        32767        64.75        15.25        80.94          99.95           12            37
SYSAUX                                  400        32767       218.94       181.06        54.73          99.45            1        218.94
SYSTEM                                  500        32767       231.19       268.81        46.24          99.18            3           231

ARROW:(SYS@tiger):PRIMARY> @free.sql
Enter value for 1: *

                                                              Database Freespace Summary

TABLESPACE_NAME                      BLKSZ   DFCT      CT_FRAG MB_FREE_FRAG     MB_FREE     MB_TOTAL PCT_USED MAX_MB_FREE MAX_PCT_USED
----------------------------------- ------ ------ ------------ ------------ ----------- ------------ -------- ----------- ------------
*m s SYSTEM                           8192      1            3          231         231          499    53.71      32,767          .82
*m s UNDOTBS1                         8192      1           12           37          65           79    53.16      32,767          .13
*a s SYSAUX                           8192      1            1          219         219          399    45.13      32,767          .55
*m s USERS                            8192      1            2        1,021       1,021         1023      .18       1,023          .18
                                           ------                           -----------                       -----------
sum                                             4                                 1,536                            99,324

ARROW:(SYS@tiger):PRIMARY>

free.sql

set line 150 echo off verify off
ttitle -
   center  'Database Freespace Summary'  skip 2
break   on report
COMPUTE sum of mb_used on report
COMPUTE sum of mb_free on report
COMPUTE sum of max_mb_free on report
COMPUTE sum of dfct on report
COLUMN mb_used       format 99,999,999
COLUMN mb_free       format 99,999,999
COLUMN max_mb_free   format 99,999,999
COLUMN mb_free_frag  format 99,999,999
COLUMN dfct          format 99999
COLUMN blksz         format 99999
COLUMN pct_used      format 999.99
COLUMN max_pct_used  format 999.99
SELECT
  DECODE(extent_management,'LOCAL','*',' ') ||
  DECODE(segment_space_management,'AUTO','a ','m ') ||
  DECODE(allocation_type,'SYSTEM','s ','u ') ||
  fs.tablespace_name tablespace_name, block_size blksz, dfct,
  fs.nfrag                      ct_frag,
  fs.mxfrag           / 1048576 mb_free_frag,
  fs.free_bytes       / 1048576 mb_free,
  df.avail            / 1048576 mb_total,
  (df.avail-fs.mxfrag)/df.avail*100 pct_used,
  df.max_bytes        / 1048576 max_mb_free,
  (df.avail-fs.mxfrag)/df.max_bytes*100 max_pct_used
FROM dba_tablespaces ts,
  (SELECT tablespace_name, count(*) dfct,
   SUM(decode(maxbytes,0,user_bytes,greatest(maxbytes,user_bytes))) max_bytes,
   SUM(user_bytes) avail
   FROM dba_data_files
   GROUP BY tablespace_name
  ) df,
  (SELECT tablespace_name,  nvl(sum(bytes),0) free_bytes, count(bytes) nfrag, nvl(max(bytes),0) mxfrag
   FROM dba_free_space
   GROUP BY tablespace_name
  ) fs
WHERE fs.tablespace_name = ts.tablespace_name(+)
AND fs.tablespace_name   = df.tablespace_name
AND regexp_like(fs.tablespace_name,'&1','i')
ORDER BY pct_used desc
/

To apply or not to apply that Cumulative Update (CU)

Pythian Group - Thu, 2016-03-24 12:56

Today the SQL Server Engineering team posted an important shift in their recommendations regarding applying Cumulative Updates (often referred to as CUs) on their blog. You can find it here.

About 4 months ago we had an internal debate regarding the best patch strategy and I noticed that our SQL Server DBAs were divided on the best approach.

Some insisted that installing CUs as they were released was the best practice, while others insisted that you should only patch if you needed the Hotfix. I don’t know what spurred it but I saw other discussions on the subject pop up in the community a few days later.

Throughout my career, I’ve been torn on the best strategy myself. I like to keep my systems up to date but I’d always taken the approach that if you needed the hotfix, then with proper testing you should apply a CU. The release of a CU has never been the trigger for me to patch all my SQL Servers in any other occasion – except – when a lot of time had passed between Service Packs which did happen.

I think that strategy of waiting a long time before applying a service pack is a flawed one and don’t recommend it. I don’t think it’s a good idea to be “one release behind” or wait a year. That said, as a career DBA I don’t think I’ll rush out and apply the CU unless it’s fixing something. If I have the cycles, I may test it early on, but I’ll probably wait a month or two and see what the community has to say about it before apply it to production.

I predict that you’ll see Microsoft (and other vendors) move away from large service packs as they move to a more agile approach to their own software. I suspect it won’t be long before we see a formal cancellation of large releases. This is all the more reason for us to ensure we have automation in place for testing and deployment, so that the release of an update isn’t a significant topic in our systems-planning meetings.

Do you apply CUs right away or delay? What’s your patch-strategy?

Categories: DBA Blogs

Team Informatics: #springintothecloud Webinar Series

WebCenter Team - Thu, 2016-03-24 12:32
Greetings!

As the clocks have just sprung forward and the weather is warming up, TEAM Informatics would like you to join us as we Spring into the Cloud!
We will be hosting a series of webinars that will each uniquely cover how TEAM's solutions and Oracle's Public Cloud offerings can benefit different areas of your business. The series will offer insights and demonstrations from our WebCenter experts that you won't want to miss! See below for more information and then click the link to register:

Intelligent Hybrid Cloud: March 31st, 2016 - 1PM CST
Do you have WebCenter Content in place, but are in need of cloud collaboration? Increase your business's productivity and collaboration with TEAM's DOCSConnect and Oracle Documents Cloud Service. Take search and find-ability to the next level with TEAM's Intelligent Content auto-classification system and Elasticsearch integration. 
Join this webinar to learn what utilizing a hybrid cloud enterprise model can do for your business! - Register

HR Review Process: April 7th, 2016 - 1PM CST
Are you looking to put HR review processes and workflows in place? Oracle Documents Cloud Service (DOCS) and Process Cloud Service (PCS) offer you a cloud-based collaborative platform to securely share employee documents while streamlining your HR review processes.
This webinar will focus on the benefits of having HR processes and workflows in one place by utilizing Oracle Documents Cloud Service and Process Cloud Service.  - Register

Marketing / Sales Microsites: April 21st, 2016 - 1PM CST
Do you need a portal for your marketing and sales staff to collaborate and share the most up-to-date marketing collateral? See how Oracle Sites Cloud and Documents Cloud Services are the perfect combination to provide a holistic platform for your teams to work together seamlessly.

This webinar will provide a live demonstration of the power of Oracle Sites Cloud and Documents Cloud Services working together! - Register




Doug Thompson, President - CEO, +1 (651) 760-4802
Volker Schaberg, General Manager, +61.2.9805.0166
Korey Berg, Marketing Director, +1 (651) 760-4813

Big Data SQL 3.0 Expanded Across All Platforms

The unprecedented explosion in data that can be made useful to enterprises – from the Internet of Things, to the social streams of global customer bases – can create tremendous value. However, with...

We share our skills to maximize your revenue!
Categories: DBA Blogs

converting excel to apex

Pat Shuff - Thu, 2016-03-24 11:00
I am trying to go through the exercise of converting an Excel spreadsheet into APEX and have stumbled across a few interesting tricks and tidbits.

One thing that I have noted is that stuff done in a spreadsheet can be automated via navigation menus in APEX. I talk about this in another blog on how to create a navigation system based on the parts of a service that you want, to get you to the calculation that you need. This is much better if you don't really know what you want and need to be led through a menu system to help you decide on the service that you are looking for.

To create a calculator for metered and un-metered services in a spreadsheet requires two workbooks. You can tab between the two and enter data into each spreadsheet. If something like a pricelist is entered into its own spreadsheet, static references and dynamic calculations can easily be built on top of it. For example, we can create a workbook for archive - metered storage services and a workbook for archive - unmetered services, which will be blank since this is not a service that is offered. If we create a third workbook called pricelist, we can enter the pricing for archive services into the pricelist spreadsheet and reference it from the other sheets. For archive cloud services you need to answer four basic questions: how many months, how much you will start archiving, how much you will end up with, and how much you expect to read back during that period. We should see the following questions:

    How Many Months?             cell F6
    Initial Storage Capacity     cell F7
    Final Storage Capacity       cell F8
    Retrieval Factor             cell F9

The cost will be calculated as (per month, then for the full term):

    Storage Capacity        ((F8+F7+((F8-F7)/F6))*F6/2*price_of_archive_per_month)/F6
                            ((F8+F7+((F8-F7)/F6))*F6/2*price_of_archive_per_month)
    Retrieval Cost          (((F8+F7+((F8-F7)/F6)/2)*(F9/100))*price_of_archive_retrieval/F6
                            (((F8+F7+((F8-F7)/F6)/2)*(F9/100))*price_of_archive_retrieval
    Outbound Data Transfer  sumifs(table lookup, table lookup, ...)
                            sumifs(table lookup, table lookup, ...)*F6

In APEX, this is done a little differently, with a sequence of select statements and formatting statements to get the right answer:

select 
'   sub-part: ' || PRICELIST.PART_NUMBER ||
    ' - Archive Storage Capacity           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2, '$999,990') as PRICE 
 from PRICELIST PRICELIST 
 where PRICELIST.PART_NUMBER = 'B82623'
UNION
select 
'   sub-part: ' || PRICELIST.PART_NUMBER ||
    ' - Archive Retrieval           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100), '$999,990') as PRICE 
 from PRICELIST PRICELIST 
 where PRICELIST.PART_NUMBER = 'B82624'
UNION
select 
'   sub-part: ' || PRICELIST.PART_NUMBER ||
    ' - Archive Deletes           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:DELETE_ARCHIVE/100), '$999,990') as PRICE 
 from PRICELIST PRICELIST 
 where PRICELIST.PART_NUMBER = 'B82629'
UNION
select 
'   sub-part: ' || PRICELIST.PART_NUMBER ||
    ' - Archive Small Files           ' as Description,
to_char(:SMALL_ARCHIVE, '$999,990') as PRICE 
 from PRICELIST PRICELIST 
 where PRICELIST.PART_NUMBER = 'B82630'
UNION
select 
'   sub-part: ' || PRICELIST.PART_NUMBER ||
    ' - Outbound Data Transfer           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100), '$999,990') as PRICE 
 from PRICELIST PRICELIST 
 where PRICELIST.PART_NUMBER = '123456'
UNION
select
'   Total:' as Description,
to_char(sum(price), '$999,990') as Price 
from (
  select   PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2 as price from PRICELIST 
    where pricelist.part_number = 'B82623'
  UNION
  select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100) as price from PRICELIST
    where pricelist.part_number = 'B82624'
  UNION
  select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:DELETE_ARCHIVE/100) as price from PRICELIST
    where pricelist.part_number = 'B82629'
  UNION
  select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100) as price from PRICELIST
    where pricelist.part_number = '123456'
    );
The variable :INITIAL_ARCHIVE replaces F7, :FINAL_ARCHIVE replaces F8, and :RETRIEVE_ARCHIVE replaces F9. Rather than referring to the pricelist spreadsheet, we enter the pricing information into a database and do a select statement with the part_number as the key for the lookup. This allows for a much more dynamic pricebook and lets us update and add items without risk of breaking the spreadsheet linkages. We can also use REST APIs to create and update pricing from an outside program, keeping our price calculator current. With a spreadsheet, users can end up with out-of-date copies, and there is no real way to tell users who have downloaded the spreadsheet that there are updates unless we are all using the same document control system.
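The same pattern can be sketched outside APEX. The snippet below uses Python's sqlite3 with an in-memory pricelist (the price here is an assumed placeholder) to show how bind variables take over the role of the spreadsheet cells:

```python
import sqlite3

# Hypothetical in-memory pricelist standing in for the APEX PRICELIST table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pricelist (part_number TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO pricelist VALUES ('B82623', 1.0)")  # assumed $/TB-month

# Named parameters play the role of :INITIAL_ARCHIVE (F7) and :FINAL_ARCHIVE (F8);
# 12 is the term in months (F6).
params = {"initial": 0.0, "final": 12.0}  # capacities in TB
row = conn.execute("""
    SELECT price * (:final + :initial + ((:final - :initial) / 12)) * 12 / 2
      FROM pricelist
     WHERE part_number = 'B82623'
""", params).fetchone()
print(f"${row[0]:,.2f}")  # formatted like TO_CHAR(..., '$999,990')
```

Updating a row in the pricelist table immediately changes every calculator that selects from it, which is the point of moving the prices out of the spreadsheet.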

Note that we can do running totals by doing a sum over a select ... union statement. This allows us to compare two different services, like Amazon Glacier and Oracle Archive, easily on the same page. The only thing that we need to add is the cost of Glacier in the database, plus the select statements for each of the Glacier components. We can use a REST API service nightly or weekly to verify the pricing of the services and keep the information up to date.
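A minimal illustration of the running-total technique, again using Python's sqlite3 (the part numbers and prices here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pricelist (part_number TEXT PRIMARY KEY, price REAL)")
# Made-up line items for two competing services.
conn.executemany("INSERT INTO pricelist VALUES (?, ?)",
                 [("OCI-ARCHIVE", 50.0), ("GLACIER", 20.0)])

# Sum over a union of per-service selects, as in the total calculation above.
total, = conn.execute("""
    SELECT SUM(price) FROM (
      SELECT price FROM pricelist WHERE part_number = 'OCI-ARCHIVE'
      UNION ALL
      SELECT price FROM pricelist WHERE part_number = 'GLACIER')
""").fetchone()
```

One caveat: plain UNION removes duplicate rows, so two line items that happen to compute to the same price would collapse into one before the SUM; UNION ALL keeps every row and is the safer choice for totals.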

The select statements that we use are relatively simple. The difficult part is the calculation and the formatting of the output. For the bulk of the select statements we pass in variables entered into a form and add or multiply values to get quantities of objects that cost money. We then look up the price from the database and print out dollar or quantity amounts of what needs to be ordered. The total calculation is probably the most complex because it uses a sum statement that takes the results of a group of select statements and reformats the result into a dollar or quantity amount.

An example of the interfaces would look like

a traditional spreadsheet

and in Application Express 5.0

Using ICS to integrate SaaS Applications??

Angelo Santagata - Thu, 2016-03-24 09:17

Are you doing SaaS, or EBS,  integrations and using Oracle Integration Cloud Service (ICS)?

Do you need some inspiration? Well this is your lucky day!

Below you'll find a collection of ICS integration videos, produced by our product managers and our UA development team, which go through, step by step, how to integrate two SaaS applications.

 

There are plenty more videos available at the Oracle Help Centre here 

Make sure you comment on the videos on YouTube so that the developers can enhance these and any future videos.

Angelo 

Oracle Cloud – Updated UI

John Scott - Thu, 2016-03-24 08:49

I was creating a new DBaaS instance recently and noticed that much of the UI had changed from when I originally blogged about it. The differences are quite striking, and it goes to show that you shouldn't base your opinion on something you saw many months ago; always check your assumptions to see if they've changed!

So for example, in one of my older running DBaaS instances the home page looks like this


Old DBaaS Landing Page

whereas in the new instance it looks like this


New DBaaS Landing Page

So functionally equivalent, but much nicer and more modern to look at.

The Database Monitor has been updated quite dramatically too, so for example it had an ‘ugly’ default webserver login page that looked like this


Database Monitor – Old Login Screen

and the home page looked like this


Database Monitor – Old Version

the new version has a much cleaner HTML based login


Database Monitor – New Login Page

and the home page looks much nicer


Database Monitor – New Landing Page

To my eyes this is a much more visually appealing page.

As well as being visually updated, there has also been some functionality added; for example, I can now clone, unplug and plug PDBs directly from this interface (I'm pretty sure you couldn't before, although I could be mistaken).


I have to say I'm pretty impressed by this update. Whilst the visuals in no way change the underlying functionality (i.e. I can still perform most of these actions at the OS or DB level), having a visually pleasing UI gives another way to interact with the service.

Great job Oracle!

