Feed aggregator

VR Research at OBUG

Oracle AppsLab - Sun, 2016-04-24 14:41


As part of our push to do more international research, I hopped over to Europe to show some customers VR and gather their impressions and thoughts on use cases. This time it was at OBUG, the Oracle Benelux User Group, which was held in Arnhem, a refreshing city along the Rhine.

Given that VR is one of the big technologies of 2016, and is poised to play a major role in the future of user experience, we want to know how our users would like to use VR to help them in their jobs. But first we just need to know what they think about VR after actually using it.

The week prior, Tawny and I showed some VR demos to customers and fellow Oracle employees at Collaborate in Las Vegas, taking them to the arctic to see whales and other denizens of the deep (link) and for the few with some extra time, defusing some bombs in the collaborative game “Keep Talking and Nobody Explodes” (game; Raymond’s blog post from GDC).

The reaction to the underwater scenes is now predictable: pretty much everyone loves it, just some more than others. There's a sense of wonder, of amazement that the technology has progressed to this point, and that it's all done with a smartphone. Several people have reached out to try to touch the sea creatures swimming by their view, only to realize they've been tricked.

Our European customers are no different from the ones we met at Collaborate, with similar ideas about how VR could be used in their businesses.

It's certainly a new technology, and we'll continue to seek out use cases while thinking up our own. In the meantime, VR is lots of fun.

Partition Storage -- 4 : Resizing Partitions

Hemant K Chitale - Sun, 2016-04-24 10:38
Building on Posts 2 (Adding Rows) and 3 (Adding Partitions) where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, lesser space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 24576 3
MY_PART_TBL P_200 32768 4
MY_PART_TBL_NDX P_100 28672 43
MY_PART_TBL_NDX P_200 33792 48

SQL>
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
2 /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
2 /

Index altered.

SQL>
SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34

SQL>
SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 3022 1100001
P_200 3668 1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 2482 1100001
P_200 2639 1100001

SQL>
SQL>
SQL> l
1 select partition_name, blocks, count(*)
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'MY_PART_TBL'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_100','P_200')
7 group by partition_name, blocks
8* order by 1,2
SQL> /

PARTITION_NA BLOCKS COUNT(*)
------------ ---------- ----------
P_100 8 16
P_100 128 19
P_200 8 16
P_200 128 20

SQL>


Partition P_100 has shrunk from 3 extents of 8MB adding up to 24,576KB to 35 extents adding up to 20,480KB. The High Water Mark has shrunk from 3,022 blocks to 2,482 blocks. (Remember : P_100 was populated with a Serial Insert.)  Partition P_200, which had been populated with a Parallel (DoP=4) Insert, has also shrunk from 32,768KB to 21,504KB and its High Water Mark from 3,668 blocks to 2,639 blocks.  The Extents are a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions was required because I did a Partition MOVE without UPDATE INDEXES) could be used to shrink the Partitions, with newer, smaller Extents allocated.
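
As a side note, the separate index REBUILD step can usually be avoided by letting the MOVE maintain the indexes itself. A minimal sketch, assuming the same MY_PART_TBL table and that the UPDATE INDEXES clause is available in your release:

-- Move the partition to smaller extents and maintain the index partitions
-- in the same statement, so no separate REBUILD is needed afterwards.
alter table my_part_tbl
  move partition p_100
  storage (initial 64K next 64K)
  update indexes;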

But what about the case of SPLIT Partition, where Partitions SPLIT from an 8MB Partition resulted in two 8MB Partitions, even for empty Partitions?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (1001)
4 into (partition p_1000, partition p_max)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_1000
3 at (901)
4 into (partition p_900, partition p_1000)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_900
3 at (801)
4 into (partition p_800, partition p_900)
5 /

Table altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL>
SQL> /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL P_300 8192 1
MY_PART_TBL P_400 8192 1
MY_PART_TBL P_600 8192 1
MY_PART_TBL P_680 8192 1
MY_PART_TBL P_700 8192 1
MY_PART_TBL P_800 64 1
MY_PART_TBL P_900 64 1
MY_PART_TBL P_1000 64 1
MY_PART_TBL P_MAX 64 1
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34
MY_PART_TBL_NDX P_300 64 1
MY_PART_TBL_NDX P_400 64 1
MY_PART_TBL_NDX P_600 64 1
MY_PART_TBL_NDX P_680 64 1
MY_PART_TBL_NDX P_700 64 1
MY_PART_TBL_NDX P_800 64 1
MY_PART_TBL_NDX P_900 64 1
MY_PART_TBL_NDX P_1000 64 1
MY_PART_TBL_NDX P_MAX 64 1

22 rows selected.

SQL>


(Note : I have manually relocated Partition P_1000 in the listing).
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with the segment-created P_MAX Partition.  However, after rebuilding P_MAX with 64KB Extents, the subsequently SPLIT Partitions (P_800 to P_1000) are also 64KB.

Note : I am not advising that all Partitions have to be 64K.  (Observe how AutoAllocate allocated 1MB Extents to P_100 and P_200 after the first 1MB of space usage, which used sixteen 64KB Extents.)
.
.
.


Categories: DBA Blogs

Video : Flashback Version Query

Tim Hall - Sat, 2016-04-23 10:15

Today’s video gives a quick run through of flashback version query.

If you prefer to read articles, rather than watch videos, you might be interested in these.
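
If you just want a flavour of the feature itself, here's a minimal sketch (the EMP table, the EMPNO/SAL columns and the 15-minute window are placeholders, not from the video): a flashback version query returns every committed version of a row within the window, along with pseudocolumns describing each change.

-- List the committed versions of one row over the last 15 minutes;
-- the VERSIONS_* pseudocolumns show when and how each version was created.
select versions_starttime, versions_endtime, versions_operation, sal
from   emp
       versions between timestamp systimestamp - interval '15' minute
                    and systimestamp
where  empno = 7788;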

The star of today's video is Tanel Poder. I was filming some other people when he saw something was going on, came across and struck a pose. I figured he knew what I was doing, but it's pretty obvious from the outtake at the end of the video that he was blissfully unaware, but wanted in on the action whatever it was! A true star!

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Hemant K Chitale - Sat, 2016-04-23 10:04
Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
2 from user_tables
3 where table_name = 'MY_PART_TBL'
4 /

TABLE_NAME NUM_ROWS
---------------- ----------
MY_PART_TBL 2200004

SQL> select partition_name, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA NUM_ROWS BLOCKS
------------ ---------- ----------
P_100 1100001 3022
P_200 1100001 3668
P_300 1 1006
P_400 1 1006
P_MAX 0 0

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the table has 5 Partitions, P_100 to P_MAX, but only 4 have had segments created, after one or more rows were populated into them.  P_MAX has no segment created for either the Table Partition or the Index Partition.

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
2 split partition p_max
3 at (501)
4 into (partition p_500, partition p_max)
5 /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE NUM_ROWS BLOCKS
------------ ---------------- ---------- ----------
P_100 101 1100001 3022
P_200 201 1100001 3668
P_300 301 1 1006
P_400 401 1 1006
P_500 501 0 0
P_MAX MAXVALUE 0 0

6 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I split a Partition with 1 or more rows?

SQL> insert into my_part_tbl
2 select 550, 'Five Hundred Fifty'
3 from dual
4 /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

10 rows selected.

SQL>
SQL> alter table my_part_tbl
2 split partition p_max
3 at (601)
4 into (partition p_600, partition p_max)
5 /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

12 rows selected.

SQL>


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX. Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 would be in the P_600 Partition and the P_MAX Partition would now be the empty Partition.  Yet even P_MAX now takes an 8MB Extent, unlike earlier.

Let's try doing such a SPLIT that, say P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
2 select 900, 'Nine Hundred'
3 from dual
4 /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (701)
4 into (partition p_700, partition p_max)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

COUNT(*)
----------
0

SQL>


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition has been created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, irrespective of the fact that one of the two may be empty.

SQL> alter table my_part_tbl
2 split partition p_700
3 at (681)
4 into (partition p_680, partition p_700)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


That is confirmation that SPLITting a Partition that has a segment (even if it is empty) results in two segmented Partitions, even if both are empty.
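
One way to verify this (a sketch, assuming the segments belong to the current schema so USER_EXTENTS can see them) is to check the extent sizes of the partitions involved:

-- Each of these partitions should show a single 8MB (8192KB) extent.
select partition_name, bytes/1024 as kb, count(*) as extent_count
from   user_extents
where  segment_name = 'MY_PART_TBL'
and    partition_name in ('P_680','P_700','P_MAX')
group  by partition_name, bytes/1024
order  by 1,2;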

Going back to Partition P_500 (which is present but segmentless), what happens if we split it ?

SQL> alter table my_part_tbl
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100 101
P_200 201
P_300 301
P_400 401
P_450 451
P_500 501
P_600 601
P_680 681
P_700 701
P_MAX MAXVALUE

10 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


Splitting the segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments being created.

 This has implications for your SPLIT Partition strategy.  If you need to do a recursive split to create, say, 90 1-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the table would suddenly "grow" by 720MB without having inserted a single row on the day you create these 90 Partitions.  You may get some questions from IT Operations / Support about the sudden "growth" in 1 day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only when they are populated.
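
If you want to check which Partitions are still segmentless before planning such splits, here is a quick sketch (assuming a release recent enough to expose the SEGMENT_CREATED column in USER_TAB_PARTITIONS):

-- Segmentless partitions show SEGMENT_CREATED = 'NO'.
select partition_name, segment_created
from   user_tab_partitions
where  table_name = 'MY_PART_TBL'
order  by partition_position;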
.
.

.
Categories: DBA Blogs

Fishbowl Hackathon 2016 Summary – Oracle WebCenter Innovations with Slack, Google Vision, and Email

This post comes from Fishbowl’s president, Tim Gruidl. One of Tim’s biggest passions is technology innovation, and not only does he encourage others to innovate, he participates and helps drive this where he can. Tim likes to say “we innovate to help customers dominate”. Tim summarizes Fishbowl’s Hackathon event, held last Friday and Saturday at Fishbowl Solutions, in the post below.

TimWhat an event! I want to start by thanking Andy Weaver and John Sim (Oracle ACE)! Without their passion, drive, leadership and innovation, this event would not be possible.

What a great event to learn, build the team, interact with others and compete. We also created some innovative solutions that I’m sure at some point will be options to help our customers innovate and extend their WebCenter investments. This year, we had 3 teams that designed and coded the following solutions:

  • InSight Image Processing – Greg Bollom and Kim Negaard

They leveraged the Google Vision API to enable the submission of images to Oracle WebCenter and then leveraged Google Vision to pull metadata back and populate fields within the system. They also added the ability to pull in GPS coordinates from photos (taken from cameras, etc.) and have that metadata and EXIF data populate WebCenter Content.

Fishbowl Product Manager, Kim Negaard, discusses the Google Vision API integration with WebCenter.

  • Slack Integration with WebCenter Portal and Content – Andy Weaver, Dan Haugen, Jason Lamon and Jayme Smith

Team collaboration is a key driver for many of our portals, and Slack is one of the most popular collaboration tools. In fact, it is currently valued at $3.6 billion, and there seems to be a rapidly growing market for what they do. The team did some crazy innovation and integration to link Slack to both WebCenter Portal and WebCenter Content. I think the technical learning and sophistication of what they did was probably the most involved and required the most pre-work and effort at the event, and it was so cool to see it actually working.

Team Slack integration presentation.

  • Oracle WebCenter Email Notes – John Sim (Oracle ACE), Lauren Beatty and me

Valuable corporate content is stored in email, and more value can be obtained from those emails if the content can be tagged and context added in a content management system – Oracle WebCenter. John and Lauren did an awesome job of taking a forwarded email, checking it into WebCenter Content to a workspace, and using related content to build relationships. You can then view the relationships in a graphical way for context. They also created a mobile app to allow you to tag the content on the go and release it for the value of the org.

That’s me explaining the email integration with Oracle WebCenter Content.

Participants voted on the competing solutions, and it ended up being a tie between the Google Insight team and the Email Notes team, but all the solutions truly showed some innovation, sophistication, and completeness of vision. A key aspect of the event for me was how it supported all of Fishbowl’s company values:

Customer First – the solutions we build were based on real-life scenarios our customers have discussed, so this will help us be a better partner for them.

Teamwork – the groups not only worked within their teams, but there was cross team collaboration – Andy Weaver helped John Sim solve an issue he was having, for example.

Intellectual Agility – this goes without saying.

Ambition – people worked late and on the weekend – to learn more, work with the team and have fun.

Continuous Learning – we learned a lot about Slack, cloud, email, etc.

Overall, the annual Hackathon is a unique event that differentiates Fishbowl on so many fronts. From the team building, to the innovation keeping us ahead of the technology curve, to all the learnings – Hackathons truly are a great example of what Fishbowl is all about.

Thanks to all that participated, and remember, let’s continue to innovate so our customers can dominate.

Tim

The post Fishbowl Hackathon 2016 Summary – Oracle WebCenter Innovations with Slack, Google Vision, and Email appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Oracle Utilities Customer Care And Billing 2.5 Benchmark available

Anthony Shorten - Fri, 2016-04-22 15:26

Oracle Utilities Customer Care and Billing v2.5.x marked a major change in application technology, as it is an all Java-based architecture.  In past releases, both Java and COBOL were supported. Over the last few releases, COBOL support has progressively been replaced to optimize the product.

In recently conducted performance benchmark tests, it was demonstrated that the performance of Oracle Utilities Customer Care and Billing v2.5.x, an all Java-based release, is at least 15 percent better than that of the already high-performing Oracle Utilities Customer Care and Billing v2.4.0.2, which included a COBOL-based architecture for key objects, in all use cases tested.

The performance tests simulated a utility with 10 million customers with both versions running the same workloads. In the key use cases tested, Oracle Utilities Customer Care and Billing v2.5.x performed at least 15% faster than the previous release.

Additionally, Oracle Utilities Customer Care and Billing v2.5.x processed 500,000 bills (representing the nightly batch billing for a utility serving 10 million customer accounts being divided into twenty groups, so that 5% of all customers are billed each night on each of the 20 working days during the month) within just 45 minutes.

The improved Oracle Utilities Customer Care and Billing performance ultimately reduces the utility staff overtime hours required to oversee batch billing, allows utilities to consolidate tasks on fewer servers and so reduce data center size and cost, and enables utilities to confidently explore new business processes and revenue sources, such as offering billing services to smaller utilities.

A whitepaper is available summarizing the results and details of the architecture used. 

Proud to Work at Pythian, One of Canada’s Top 25 ICT Professional Services Companies

Pythian Group - Fri, 2016-04-22 13:18

It’s only four months into 2016, and there’s a lot to be excited about. In addition to moving Pythian’s global headquarters in Ottawa, Canada to the hip and happening neighbourhood of Westboro, we’ve been receiving accolades for being one of Canada’s top ICT professional services companies, and a great place to work. Following are three reasons to be proud to work at Pythian.

In April Pythian was recognized as one of Canada’s Top 25 Canadian ICT Professional Services Companies on the prestigious Branham300 list. We also appeared on the Top 250 Canadian ICT Companies list for the second year in a row.

The Branham300 is the definitive listing of Canada’s top public and private ICT companies, as ranked by revenues. Not too many people can say that they work at a company that is one of the Top 25 ICT Professional Services Companies in Canada.

In February, our CEO Paul Vallée was named “Diversity Champion of the Year” by Women in Communications and Technology (WCT). In 2015 Pythian launched the Pythia Project, a corporate initiative designed to increase the percentage of talented women who work and thrive at Pythian, especially in tech roles. A new metric called the “Pythia Index” was also introduced. It measures the proportion of people in a business, or in a team, who are women leaders or report to a woman leader. Pythian was also the first Canadian tech company to release its gender stats, and invite other Canadian tech companies to join in the battle against “bro culture”. Stay tuned for more news on the Pythia program in the coming months.

And last, but not least, in March, Pythian was selected as one of Canada’s Top Small & Medium Employers for 2016. This award recognizes small and medium employers with exceptional workplaces and forward-thinking human resource policies. Everyone that works at Pythian is aware of the amazing benefits, but there is a hard working team that really goes the extra mile to make the company a great place to work. Thank you.

Clearly 2016 is off to a fantastic start! I’m looking forward to more good news to share.

Categories: DBA Blogs

How to set up Flashback for MongoDB Testing

Pythian Group - Fri, 2016-04-22 12:52

 

After you’ve upgraded your database to a new version, it’s common that the performance degrades in some cases. To prevent this from happening, we could capture the production database operations and replay them in the testing environment which has the new version installed.

Flashback is a MongoDB benchmark framework that allows developers to gauge database performance by benchmarking queries. Flashback records the real traffic to the database and replays operations with different strategies. The framework is comprised of a set of scripts that fall into 2 categories:

  1. Records the operations (ops) that occur during a stretch of time
  2. Replays the recorded ops
Installation

The framework was tested on Ubuntu 10.04.4 LTS

Prerequisites

-go 1.4

-git 2.3.7

-python 2.6.5

-pymongo 2.7.1

-libpcap0.8 and libpcap0.8-dev

 

  1. Download Parse/Flashback source code

# go get github.com/ParsePlatform/flashback/cmd/flashback

  2. Manually modify the following file to work around a mongodb-tools compatibility issue

In the pass_util.go file:

func GetPass() string {
-    return string(gopass.GetPasswd())
+    if data, errData := gopass.GetPasswd(); errData != nil {
+        return ""
+    } else {
+        return string(data)
+    }
 }

 

  3. Compile the Go part of the tool

# go build -i ./src/github.com/ParsePlatform/flashback/cmd/flashback

 

Configuration

Suppose you have two shards, Shard a and Shard b. Each shard has 3 nodes. In shard a, the primary is a1. In shard b, the primary is b2.

1. Copy the sample config file for editing

# cp ./src/github.com/ParsePlatform/flashback/record/config.py.example  config.py

2. Change config for testing

DB_CONFIG = {

    # Indicates which database(s) to record.
    "target_databases": ["test"],

    # Indicates which collections to record. If the user wants to capture all the
    # collections' activities, leave this field as `None` (but we'll always
    # skip collection `system.profile`, even if it has been explicitly
    # specified).
    "target_collections": ["testrecord"],

    "oplog_servers": [
        { "mongodb_uri": "mongodb://mongodb.a2.com:27018" },
        { "mongodb_uri": "mongodb://mongodb.b1.com:27018" }
    ],

    # In most cases you will record from the profile DB on the primary.
    # If you are also sending queries to secondaries, you may want to specify
    # a list of secondary servers in addition to the primary.
    "profiler_servers": [
        { "mongodb_uri": "mongodb://mongodb.a1.com:27018" },
        { "mongodb_uri": "mongodb://mongodb.b2:27018" }
    ],

    "oplog_output_file": "./testrecord_oplog_output",
    "output_file": "./testrecord_output",

    # If overwrite_output_file is True, the same output file will be
    # overwritten in between consecutive calls of the recorder. If
    # it's False, the recorder will append a unique number to the end of the
    # output_file if the original one already exists.
    "overwrite_output_file": True,

    # The length of the recording.
    "duration_secs": 3600
}

APP_CONFIG = {
    "logging_level": logging.DEBUG
}

 

duration_secs indicates the length of the recording. For a production capture, you should set it to at least 10-12 hours.

Make sure the user running the recorder has write permission to the output directory.

Recording
  1. Set the profiling level to 2 on all primary servers

db.setProfilingLevel(2)

2. Start operations recording

./src/github.com/ParsePlatform/flashback/record/record.py

3. The script starts multiple threads to pull the profiling results and oplog entries for collections and databases that we are interested in. Each thread works independently. After fetching the entries, it will merge the results from all sources to get a full picture of all operations as one output file.

4. You can run record.py from any server, as long as the server has flashback installed and can connect to all mongod servers.

5. As a side note, running mongod in replica set mode is necessary (even when there is only one node) in order to generate and access the oplogs.

 

Replay
  1. Run flashback. Style can be "real" or "stress".

        Real: replays ops in accordance with their original timestamps, which allows us to imitate regular traffic.

        Stress: preloads the ops into memory and replays them as fast as possible. This potentially limits the number of ops played back per session to the available memory on the replay host.

For sharded collections, point the tool to a mongos. You could also point to a single shard primary for non-sharded collections.

./flashback -ops_filename="./testrecord_output" -style="real" -url="localhost:27018" -workers=10

Observations
  • Several pymongo (Python's MongoDB driver) arguments in the code are deprecated, causing installation and running errors.
  • Need to define a faster restore method (i.e. LVM snapshots) to roll back the test environment after each replay.
  • Need to capture execution times for each query included in the test set to be able to detect execution plan changes.
  • In a sharded cluster, record can be executed from a single server with access to all primaries and/or secondaries.
  • Pulling oplogs from secondaries is recommended if we are looking to reduce load on the primaries.
  • Memory available would dramatically affect the operations merge process after recording.
  • Memory available would also affect replay times (see Tests summary).
Tests summary

 

Record test scenario 1

 

Record server: mongos server (8G RAM)

Time : about 2 hours to finish the recording

Details: Ran record while inserting and updating 1000 documents

 

Record test scenario 2

 

Record server: shard a primary node a1 (80G RAM)

Time: about 2 minutes to finish the recording

Details: Ran record while inserting and updating 1000 documents

Record test scenario 3

 

Record server: shard a primary node a1 (80G RAM)

Time: it took about 20 minutes to finish the recording

Details: Ran record while inserting and updating 100,000 documents

Replay test scenario 1

Replay server: mongos server (8G RAM)

Time: it took about 1 hour to finish the replay

Details: replayed 1000 operations in “real” style

 

Replay test scenario 2

Replay server: shard a primary node a1 (80G RAM)

Time: about 5 minutes to finish the replay

Details: replayed 1000 operations in “real” style

Replay test scenario 3

Replay server: mongos server (8G RAM)

Time: failed due to insufficient memory

Details: replayed 1000 operations in “stress” style

 

Replay test scenario 4

Replay server: shard a primary node a1 (80G RAM)

Time: about 1 minute to finish the replay

Details: replayed 1000 operations in “stress” style

 

Replay test scenario 5

Replay server: shard a primary node a1 (80G RAM)

Time: about 20 minutes to finish the replay

Details: replayed 50,000 operations in “stress” style

Categories: DBA Blogs

SQL Saturdays: Learn Where You Can See Presentations by RDX SQL Server Experts

Chris Foot - Fri, 2016-04-22 12:10

SQL Saturdays are free, one-day training events held in cities worldwide throughout the year. Hundreds of SQL Server professionals attend to hear presentations by those highly regarded in the field and to learn best practices associated with their trade. Since speaking sessions range from beginner to advanced levels, professionals of all levels are encouraged to attend. There are also a variety of networking opportunities available, such as pre-conference sessions in select cities and after parties.

Part 1: Running Oracle E-Business Suite on Oracle Cloud

Steven Chan - Fri, 2016-04-22 12:03

[Contributing authors: Terri Noyes]

You can now run Oracle E-Business Suite on Oracle Cloud.  EBS customers can take advantage of rapid environment provisioning, elastic infrastructure that scales up on demand, and a pay-as-you-go model to reduce capital expenditures.

EBS customers have the option of deploying their EBS 12.1.3 and 12.2 environments on the Oracle Compute Cloud Service, and optionally in combination with the Oracle Database Cloud Service (DBCS), or the Exadata Cloud Service.


What EBS customers can do on the cloud

  • Provision new instances of E-Business Suite
  • Clone your own E-Business Suite environments to the cloud
  • Deploy development tools for E-Business Suite
  • Customize and extend your cloud-based E-Business Suite environments
  • Migrate those customizations to other cloud-based or on-premise EBS environments
  • Manage cloud and on-premise EBS environments using Oracle Application Management Suite
  • Use your certified third-party products with EBS cloud-based environments

What additional fees are required for EBS customers?

Oracle Cloud resources and services are available for subscription by Oracle customers. Customers who already own Oracle E-Business Suite product licenses may use their Oracle Cloud subscription to provision and deploy Oracle E-Business Suite instances on the Oracle Cloud. The deployed instances may be used for development, testing, training, production, etc. Once a customer acquires a subscription to Oracle Cloud, no additional Oracle E-Business Suite license or usage license is required.

Do I need to purchase EBS product licenses?

Oracle customers who already own Oracle E-Business Suite licenses may use the Oracle Cloud to host instances of their applications. Oracle’s Compute Cloud uses a “Bring your Own License” model, so customers who wish to use the Oracle Compute Cloud must already own a valid license to the software deployed on virtual machines in Oracle Compute Cloud.

Where can I get more details?

Related Articles


Categories: APPS Blogs

Pictures from the good ol’ days

Dan Norris - Fri, 2016-04-22 07:44

My friends from childhood will know my dad. He was likely their high school principal (he was mine too) in a very small town (of about 2500 people on a good day). Those who knew our school may have seen the inside of his office; some were there because they stopped in for a nice visit, others were directed there by upset teachers. In either case, seeing the wall in his office was somewhat overwhelming. At peak, he had 70+ 8×10 photos framed and hanging on his wall. The pictures were of various sports teams and graduating classes from his tenure as principal.

I found those pictures in some old boxes recently. Almost 100% of them were taken by one of our high school math teachers, Jim Mikeworth, who was also a local photographer. Mr. Mike said he was fine with me posting the pictures, so I scanned all of them in and posted them online. If you have a facebook account, you may have already seen them, but if not, they are still accessible without a facebook account. You can find the pictures at https://www.facebook.com/franknorriswall. I hope you enjoy them!

My dad died almost 20 years ago and arguably was one of the most loved men in the history of Villa Grove. He would love for everyone to enjoy this shrine to his office wall of pictures–he was very proud of all the kids that passed through VGHS during his time there (1978-1993, I think).

Have you seen the menu?

Darwin IT - Fri, 2016-04-22 06:49
And did you like it? Hardly possible to miss, I think. It kept me nicely busy for a few hours. I got some great examples, and this one is purely based on CSS and unnumbered lists in combination with anchors. Unfortunately the menu worked with non-classed <ul>, <li> and <a> tags, so embedding the CSS caused my other elements to be redefined. (It even redefined the padding of all elements.)

But with some trial and error I got it working in a subclassed form. And I like it, do you?

I also found that besides articles, you can also create pages in Blogger. I did not know about that; I'd completely overlooked it. I think I'll try something out, so if you're a regular visitor, you might find that there's work in progress.

The wish for a menu popped up a little while ago, and I kept thinking about it, to be able to get some structure in my articles. From the beginning I tagged every article, but not with a real plan. So I got tags stating 'Oracle BPM Suite', but also 'SOA Suite'. And 'Database', but also 'Database 11g'. Not so straightforward and purposeful.

But a purpose arose. For a longer while I've been thinking about whether writing a book would be something for me. I like to write articles on a (ir)regular basis. On this blog you can find a broad range of subjects. But could I do a longer series on a particular subject? And could it lead to a more structured and larger form like a book? I learned from a former co-worker that he had the idea to write articles on a regular basis to build up a book gradually. And I like that. But what subject would it be? My core focus area is SOA Suite and BPM Suite. But loads of books are written about that. Well, maybe not loads, but at least some recognized, good ones. And even PCS (Process Cloud Service) and ICS (Integration Cloud Service) are (being) covered.

But when Oracle acquired Collaxa in 2004, I worked at Oracle Consulting and got to work with it in the very early days. And I think in the Netherlands at least, I was (one of) the first one(s) from Oracle to provide training on BPEL, at least for Oracle University in the Netherlands. So I got involved in BPEL from the first hour Oracle laid hands on it. Could BPEL be a subject I could cover? Of course I'll not be the first one to cover that. Both on BPEL 1.1 as on 2.0 you can google up a book (is that already a term?), the one on 1.1 I still had stacked in a pile behind another one on my bookshelf.

So let's see where this leads me. You can expect a series on BPEL, in parallel with other articles on subjects that come up during my work. From real novice (do you already use scopes and local variables?) up to some more advanced stuff (how about dynamic partnerlinks; are you already into Correlation Sets, transaction handling, BPEL and Spring?)

It might bleed to death. It might become a nice series and nothing more than that. And it might turn out to be a really informative stack of articles that could be re-edited into a book. But when I get to covering the more advanced subjects, I plan to poll for what you want to have covered. I think I do know something about BPEL. But as you read with me, maybe you could point me to subjects I don't know yet. Consider yourself invited to read along.

Introducing Oracle WebLogic Server 12.2.1 Multitenancy: A Q&A Game

Following our Partner Webcast «Oracle WebLogic Server 12.2.1 Multitenancy and Continuous Availability», delivered earlier this month on the 21st of April 2016, where we focused on the two new main...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Web technology in APEX Development

Dimitri Gielis - Fri, 2016-04-22 04:54
How did you get started with developing your first APEX app? 

My guess is either you went to https://apex.oracle.com and got a free account, or Oracle Application Express was already in your company and somebody told you the URL you could connect to. For me that is really the power of APEX: you just go to a URL and within minutes you've created your first app.

Staying within the APEX framework?

With APEX you create web applications, but you don't have to worry about CSS, JavaScript, HTML5, Session State etc.; it just comes with the framework. In APEX you have the Universal Theme to visually adapt the look and feel of your app, there are Dynamic Actions that do all the JavaScript for you, and the framework generates all the HTML and processing that is necessary.
So although we are creating web applications, at first we are not doing what typical web developers do (creating html, css, javascript files).
Oracle closely looks at all the web technology, decides which streams it will follow (e.g. the jQuery framework), and implements and tests it so we don't have to worry about a thing.

Going to the next level?

The web is evolving fast, and I mean really fast (!) so maybe you saw something really nice on the web you wish you had in your APEX app, but it's not yet declaratively available... now the nice thing about APEX is that you can extend it yourself by using plugins (see the plugins section on apex.world) or just by writing the code yourself as other web developers do.


Trying new web technology locally

When you want to try those shiny new web things in your APEX app, I recommend trying to get them working locally first. Last year, for example, I gave a presentation about Web Components at different Oracle conferences, and this year I'll present on Service Workers. All the research I did on those topics was initially not in an APEX context. But how do you get started with this?

The first thing you need is a local web server. Depending on the OS you're on, you might already have one (e.g. IIS, Apache, ...); if not, here's what I do on OSX.
OSX comes with Python, which allows you to create a simple web server.
Open Terminal and go to the directory where you want to test your local files and run:

$ python -m SimpleHTTPServer 8000   (Python 2.7)
$ python3 -m http.server 8000   (Python 3.0)

There're many other ways to have a local web server, see for example this article or a simple web server based on node.js.

The next thing is to start developing your HTML, CSS, JavaScript etc.
To do this development, you probably want some tools; an editor like Sublime or Atom, a CSS and JS preprocessor, Browser extensions, build tools like Gulp etc.
You don't need all those tools, just an editor is fine, but soon enough you want to be more efficient in your development, and tools just help :) Here're some nice articles about different tools: Google Developers - Getting Started, Keenan Payne 13 useful web dev tools and Scott Ge list of web development tools.

Going from custom web development to APEX - use APEX Front-End Boost

So you have your local files developed and next is to integrate them in your APEX app.
You add some code to your APEX pages and upload the files so APEX can see them.
If everything works immediately - great, but most of the time you probably need to make more changes, so you change your local files, test again, upload, etc. You could streamline this a bit by setting up a proxy or referencing localhost files while in development... But then, you're happy you're part of the APEX community...


To ease the above development and integration with APEX, Vincent Morneau and Martin Giffy D'Souza created the excellent APEX Front-End Boost package. The package is using many of the above tools behind the scenes, but it's all integrated in a nice box. This video goes into full detail about what the tool is doing for you and how to use it. In short, it bridges the gap between working with a file locally, making it production ready, and seeing it immediately in your APEX app :)

In the next post I'll talk about the importance of using https and also setting it up for localhost (also for APEX Front-End Boost).

    Categories: Development

    So how about other cloud providers

    Pat Shuff - Fri, 2016-04-22 02:07
If you are looking for a cloud hosting provider, the number one question that comes up is which one to use. There are a ton of cloud providers. How do you decide which one is best for you? To be honest, the answer is it depends. It depends on what your problem is and what problem you are trying to solve. Are you trying to solve how you communicate with customers? If so, do you purchase something like SalesForce or Oracle Sales Cloud to get a cloud-based sales automation tool? Doing a search on the web yields a ton of references. Unfortunately, you need to know what you are searching for. Are you trying to automate your project management (Oracle Primavera or Microsoft Project)? Every PC magazine and trade publication has opinions on this. Companies like Gartner and Forrester write reviews. Oracle typically does not rate well with any of these vendors for a variety of reasons.

My recommendation is to look at the problem that you are trying to solve. Are you trying to lower your cost of on-site storage? Look at generic cloud storage. Are you trying to reduce your data center costs and go with a disaster recovery site in the cloud? Look at infrastructure in the cloud and compute in the cloud. I had a chance to play with VMWare VCloud this week and it has interesting features. Unfortunately, it is a really bad generic cloud storage option. You can't allocate 100 TB of storage and access it remotely without going through a compute engine and paying for a processor, operating system, and OS administrator. It is really good if I have VMWare and want to replicate the instances into the cloud or use VMotion to move things to the cloud. Unfortunately, this solution does not work well if I have a Solaris or AIX server running in my data center and want to replicate into the cloud.

    The discussion on replication opens a bigger can of worms. How do you do replication? Do you take database and java files and snap mirror them to the cloud or replicate them as is done inside a data center today? Do you DataGuard the database to a cloud provider and pay on a monthly basis for the database license rather than owning the database? Do you setup a listener to switch between your on-site database and cloud database as a high availability failover? Do you setup a load balancer in front of a web server or Java app server to do the same thing? Do you replicate the visualization files from your VMWare/HyperV/OracleVM/Zen engine to a cloud provider that supports that format? Do you use a GoldenGate or SOA server to physically replicate objects between your on-site and cloud implementation? Do you use something like the Oracle Integration server to synchronize data between cloud providers and your on-premise ERP system?

    Once you decide on what level to do replication/fail over/high availability you need to begin the evaluation of which cloud provider is best for you. Does your cloud provider have a wide range of services that fits the majority of your needs or do you need to get some solutions from one vendor and some from another. Are you ok standardizing on a foundation of a virtualization engine and letting everyone pick and choose their operating system and application of choice? Do you want to standardize at the operating system layer and not care about the way things are virtualized? When you purchase something like SalesForce CRM, do you even know what database or operating system they use or what virtualization engine supports it? Do or should you care? Where do you create your standards and what is most important to you? If you are a health care provider do you really care what operating system that your medical records systems uses or are you more interested in how to import/export ultrasound images into your patients records. Does it really matter which VM or OS is used?

The final test that you should look at is options. Does your cloud vendor have ways of easily getting data to them and easily getting data out? Both Oracle and Amazon offer tape storage services. Both offer disks that you can ship from your data center to their cloud data centers to load data. Which one offers to ship tapes to you when you want to get them back? Can you only backup from a database in the cloud to storage in the cloud? What does it cost to get your data back once you give it to a cloud provider? What is the outbound charge rate and did you budget enough to even terminate the service without walking away from your data? Do they provide an unlimited read and write service so that you don't get charged for outbound data transfer?

Picking and choosing a cloud vendor is not easy. It is almost as difficult as buying a house, a car, or picking a phone carrier. You will never know if you made the right choice until you get penalized for making the wrong choice. Tread carefully and ask the right questions as you start your research.

    Links for 2016-04-21 [del.icio.us]

    Categories: DBA Blogs

    Storage on Azure

    Pat Shuff - Thu, 2016-04-21 02:07
Yesterday we were out on a limb. Today we are going to be skating on thin ice. Not only do I know less about Azure than AWS, but Microsoft also has significantly different thoughts and solutions on storage than the other two cloud vendors. First, let's look at the available literature on Azure storage.

There are four types of storage available with Azure storage services: blob storage, table storage, queue storage, and file storage. Blob storage is similar to the Oracle Block Storage or Amazon S3 storage. It provides blocks of pages that can be used for documents, large log files, backups, databases, videos, and so on. Blobs are objects placed inside of containers that have characteristics and access controls. Table storage offers the ability to store key/attribute entries in a semi-structured dataset similar to a NoSQL database. Queue storage provides a messaging system so that you can buffer and sequence events between applications. The fourth and final type is file based storage, similar to Dropbox or Google Docs. You can read and write files and file shares and access them through SMB file mounts on Windows systems.

    Azure storage does give you the option of deciding upon your reliability model by selecting the replication model. The options are locally triple redundant storage, replication between two data centers, replication between different geographical locations, or read access geo-redundant storage.

Since blob storage is probably more relevant for what we are looking for, let's dive a little deeper into this type of storage. Blobs can be allocated either as block blobs or page blobs. Block blobs are aggregations of blocks that can be allocated in different sizes. Page blobs are made up of smaller, fixed-size 512-byte pages. Page blobs are the foundation of virtual machines and are used by default to support operating systems running in a virtual machine. Blobs are allocated into containers and inherit the characteristics of the container. Blobs are accessed via REST apis. The address of a blob is formatted as http://(account-name).blob.core.windows.net/(container-name)/(blob-name). Note that the account name is defined by the user. It is important to note that the account-name is not unique to your account. This is something that you create and Microsoft adds it to their DNS so that your ip address on the internet can be found. You can't choose simple names like test, testing, my, or other common terms because they have been allocated by someone else.

To begin the process we need to log into the Azure portal and browse to the Storage create options.

    Once we find the storage management page we have to click the plus button to add a new storage resource.

    It is important to create a unique name. This name will be used as an extension of the REST api and goes in front of the server address. This name must be unique so picking something like the word "test" will fail since someone else has already selected it.

    In our example, we select wwpf which is an abbreviation for a non-profit that I work with, who we play for. We next need to select the replication policy to make sure that the data is highly available.

    Once we are happy with the name, replication policy, resource group, and payment method, we can click Create. It takes a while so we see a deploying message at the top of the screen.

    When we are finished we should see a list of storage containers that we have created. We can dive into the containers and see what services each contains.

    Note that we have the option of blob, table, queue, and files at this point. We will dive into the blob part of this to create raw blocks that can be used for backups, holding images, and generic file storage. Clicking on the blob services allows us to create a blob container.

    Note that the format of the container name is critical. You can't use special characters or capital letters. Make sure that you follow the naming convention for container names.

    We are going to select a blob type container so that we have access to raw blocks.

    When the container is created we can see the REST api point for the newly created storage.

    We can examine the container properties by clicking on the properties button and looking at when it was created, lease information, file count, and other things related to container access rights.

The easiest way to access this newly created storage is to do the same thing that we did with Oracle Storage. We are going to use the CloudBerry Explorer. In this GUI tool we will need to create an attachment to the account. Note that the tool used for Azure is different from the Oracle and Amazon tools. Each costs a little money and they are not the same tool, unfortunately. They also only work on a Windows desktop, which is challenging if you use a Mac or Linux desktop.

    To figure out your access rights, go to the storage management interface and click on the key at the top right. This should open up a properties screen showing you the account and shared access key.

    From here we can access the Azure blob storage and drag and drop files. We first add the account information then navigate to the blob container and can read and write objects.

    In this example, we are looking at virtual images located on our desktop "E:\" drive and can drag and drop them into a blob container for use by an Azure compute engine.
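The same upload can also be sketched without the GUI by calling the Put Blob REST operation directly with curl. This is only a rough sketch: the local file, container, and SAS token below are assumptions, and VM disk images would normally be uploaded as page blobs rather than block blobs.

    # Upload a local file as a block blob using a SAS token (placeholder values).
    curl -X PUT -T "E:/image.vhd" \
      -H "x-ms-blob-type: BlockBlob" \
      "https://wwpf.blob.core.windows.net/backups/image.vhd?<sas-token>"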

    In summary, Azure storage is very similar to Amazon S3 and Oracle Storage Cloud Services. The cost is similar. The way we access it is similar. The way we protect and restrict access to it is similar. We can address it through a REST API (sketched briefly above) and can access it from our desktop or a compute server running in Azure. Overall, storage in the cloud is storage in the cloud. You need to examine your use cases and see which storage type works best for you. Microsoft does have an on-premise gateway product called Azure StorSimple, which is similar to the Amazon Storage Gateway or the Oracle Cloud Storage Appliance. It is more of a hardware solution that attaches via iSCSI to existing servers.

    Learning to answer questions for yourself!

    Tim Hall - Thu, 2016-04-21 01:56

    It’s not important that you know the answer. It’s important you know how to get the answer!

    I’m pretty sure I’ve written this before, but I am constantly surprised by some of the questions that come my way. Not surprised that people don’t know the answer, but surprised they don’t know how to get the answer. The vast majority of the time, when someone asks me a question that I can’t answer off the top of my head, this is what I do, in this order.

    1. Google their question, often using the subject line of their post or email. A lot of the time, the first couple of links will give me the answer. Sometimes it’s one of my articles that gives me the answer.

    Microsoft Ending Support for Vista in April 2017

    Steven Chan - Wed, 2016-04-20 14:41
    Microsoft is ending support for Windows Vista on April 11, 2017.  The official support dates are published here:

    Windows Vista is certified for desktop clients accessing the E-Business Suite today.  Our general policy is that we support certified third-party products as long as the third-party vendor supports them.  When the third-party vendor retires a product, we consider that to be an historical certification for EBS.

    What can EBS customers expect after April 2017?

    After Microsoft desupports Vista in April 2017:

    • Oracle Support will continue to assist, where possible, in investigating issues that involve Windows Vista.
    • Oracle's ability to assist may be limited due to limited access to PCs running Windows Vista.
    • Oracle will continue to provide access to existing EBS patches for Windows Vista.
    • Oracle will provide new EBS patches only for issues that can be reproduced on later operating system configurations that Microsoft is actively supporting (e.g. Windows 7, Windows 10).

    What should EBS customers do?

    Oracle strongly recommends that E-Business Suite customers upgrade their desktops from Windows Vista to the latest certified equivalents. As of today, those are Windows 7, 8.1, and 10.

    Categories: APPS Blogs

    Data Encryption at Rest in Oracle MySQL 5.7

    Pythian Group - Wed, 2016-04-20 13:28

     

    I’ve previously evaluated MariaDB’s 10.1 implementation of data encryption at rest (https://www.pythian.com/blog/data-encryption-rest), and recently did the same for Oracle’s implementation (https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html) in their MySQL 5.7.

     

    First, here’s a walkthrough of enabling encryption for MySQL 5.7:

    1. Install keyring plugin.

    1a. Add the following to the [mysqld] section of /etc/my.cnf:

    ...
    early-plugin-load=keyring_file.so

    1b. Restart the server:

    ...
    service mysqld restart

    1c. Verify:

    ...
    mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'keyring%';
    +--------------+---------------+
    | PLUGIN_NAME  | PLUGIN_STATUS |
    +--------------+---------------+
    | keyring_file | ACTIVE        |
    +--------------+---------------+
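It is also worth checking where the keyring file itself lives, since losing it makes the encrypted tablespaces unreadable. The location is controlled by the keyring_file_data variable; the value is platform and packaging dependent, so treat whatever you see as the path to protect and back up alongside your backups.

    ...
    mysql> show global variables like 'keyring_file_data';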

    2. Ensure innodb_file_per_table is on.

    2a. Check.

    ...
    mysql> show global variables like 'innodb_file_per_table';
    +-----------------------+-------+
    | Variable_name         | Value |
    +-----------------------+-------+
    | innodb_file_per_table | ON    |
    +-----------------------+-------+

    2b. If OFF, add the following to the [mysqld] section of /etc/my.cnf, restart, and alter each existing table to move it to its own tablespace:

    innodb_file_per_table=ON

    Get list of available InnoDB tables:

    mysql> select table_schema, table_name, engine from information_schema.tables where engine='innodb' and table_schema not in ('information_schema');

    Run ALTER … ENGINE=INNODB on each above InnoDB tables:

    mysql> ALTER TABLE [TABLE_SCHEMA].[TABLE_NAME] ENGINE=INNODB;
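
    If there are many tables, the ALTER statements can be generated rather than typed by hand. This is a rough sketch: the schema filter and output file name are my own additions, and the mysql system schema is deliberately left alone.

    ...
    [root@localhost ~]# mysql -N -e "select concat('ALTER TABLE \`',table_schema,'\`.\`',table_name,'\` ENGINE=INNODB;') \
        from information_schema.tables where engine='InnoDB' \
        and table_schema not in ('information_schema','performance_schema','mysql','sys')" > alter_innodb_tables.sql
    [root@localhost ~]# mysql < alter_innodb_tables.sql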

     

    Next, I walked through some testing.

    1. Create some data.

    ...
    [root@localhost ~]# mysqlslap --concurrency=50 --number-int-cols=2 --number-char-cols=3 --auto-generate-sql --auto-generate-sql-write-number=10000 --no-drop

    2. Observe the mysqlslap.t1 table is not automatically encrypted. Unlike MariaDB’s implementation, there is not an option to encrypt tables by default.

    2a. Via the mysql client:

    ...
    mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';
    Empty set (0.05 sec)

    2b. Via the command line:

    (Install xxd if required.)

    ...
    [root@localhost ~]# yum install vim-common
    ...
    [root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
    ...
    0010dc0: 5967 4b30 7530 7942 4266 664e 6666 3143  YgK0u0yBBffNff1C
    0010dd0: 5175 6470 3332 536e 7647 5761 3654 6365  Qudp32SnvGWa6Tce
    0010de0: 3977 6576 7053 3730 3765 4665 4838 7162  9wevpS707eFeH8qb
    0010df0: 3253 5078 4d6c 6439 3137 6a7a 634a 5465  2SPxMld917jzcJTe
    ...

    3. Insert some identifiable data into the table:

    ...
    mysql> insert into mysqlslap.t1 values (1,2,"private","sensitive","data");
    Query OK, 1 row affected (0.01 sec)
    
    mysql> select * from mysqlslap.t1 where charcol2="sensitive";
    +---------+---------+----------+-----------+----------+
    | intcol1 | intcol2 | charcol1 | charcol2  | charcol3 |
    +---------+---------+----------+-----------+----------+
    |       1 |       2 | private  | sensitive | data     |
    +---------+---------+----------+-----------+----------+
    1 row in set (0.02 sec)

    4. Observe this data via the command line:

    ...
    [root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
    ...
    04fa290: 0002 7072 6976 6174 6573 656e 7369 7469  ..privatesensiti
    ...

    5. Encrypt the mysqlslap.t1 table:

    ...
    mysql> alter table mysqlslap.t1 encryption='Y';
    Query OK, 10300 rows affected (0.31 sec)
    Records: 10300  Duplicates: 0  Warnings: 0
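
    New tables can also be created encrypted from the start rather than altered afterwards. A minimal sketch (the table name and columns are made up):

    ...
    mysql> create table mysqlslap.t1_secure (
        ->   intcol1 int(32),
        ->   charcol1 varchar(128)
        -> ) engine=InnoDB encryption='Y';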

    6. Observe the mysqlslap.t1 table is now encrypted:

    6a. Via the mysql client:

    ...
    mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';
    +--------------+------------+----------------+
    | TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
    +--------------+------------+----------------+
    | mysqlslap    | t1         | ENCRYPTION="Y" |
    +--------------+------------+----------------+

    6b. Via the command line:

    ...
    [root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep "private"
    [root@localhost ~]#

    6c. Observe snippet of the file:

    ...
    [root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
    ...
    0004160: 56e4 2930 bbea 167f 7c82 93b4 2fcf 8cc1  V.)0....|.../...
    0004170: f443 9d6f 2e1e 9ac2 170a 3b7c 8f38 60bf  .C.o......;|.8`.
    0004180: 3c75 2a42 0cc9 a79b 4309 cd83 da74 1b06  <u*B....C....t..
    0004190: 3a32 e104 43c5 8dfd f913 0f69 bda6 5e76  :2..C......i..^v
    ...

    7. Observe redo log is not encrypted:

    ...
    [root@localhost ~]# xxd /var/lib/mysql/ib_logfile0 | less
    ...
    23c6930: 0000 0144 0110 8000 0001 8000 0002 7072  ...D..........pr
    23c6940: 6976 6174 6573 656e 7369 7469 7665 6461  ivatesensitiveda
    23c6950: 7461 3723 0000 132e 2f6d 7973 716c 736c  ta7#..../mysqlsl
    ...

    This is expected because the documentation (https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html) reports encryption of files outside the tablespace is not supported: “Tablespace encryption only applies to data in the tablespace. Data is not encrypted in the redo log, undo log, or binary log.”
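
    Given that statement, the binary log (if enabled) deserves the same check as the redo log. The file name below is an assumption that depends on the log_bin setting on your server.

    ...
    [root@localhost ~]# xxd /var/lib/mysql/mysql-bin.000001 | grep "private"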

    Conclusions

    I found in my testing of MariaDB’s implementation of data encryption at rest that there were still places on the file system that a bad actor could view sensitive data. I’ve found the same in this test of Oracle’s implementation. Both leave data exposed in log files surrounding the tablespace files.

    Bonus

    As a bonus to this walkthrough, during this testing, the table definition caught my eye:

    ...
    mysql> show create table mysqlslap.t1\G
    *************************** 1. row ***************************
           Table: t1
    Create Table: CREATE TABLE `t1` (
      `intcol1` int(32) DEFAULT NULL,
      `intcol2` int(32) DEFAULT NULL,
      `charcol1` varchar(128) DEFAULT NULL,
      `charcol2` varchar(128) DEFAULT NULL,
      `charcol3` varchar(128) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ENCRYPTION='Y'
    1 row in set (0.00 sec)

    As discussed in https://jira.mariadb.org/browse/MDEV-9571, the MariaDB implementation does not include the “encrypted=yes” information in the table definition when tables are implicitly encrypted.

    I was curious what would happen if I did a mysqldump of this encrypted table and attempted to restore it to a nonencrypted server. DBAs expect mysqldump to create a portable file to recreate the table definition and data on a different version of MySQL. During upgrades, for example, you might expect to use this for rollback.

    Here is my test. I first did the dump and looked inside the file.

    ...
    [root@localhost ~]# mysqldump mysqlslap t1 > mysqlslap_t1_dump
    [root@localhost ~]# less mysqlslap_t1_dump
    ...
    CREATE TABLE `t1` (
      `intcol1` int(32) DEFAULT NULL,
      `intcol2` int(32) DEFAULT NULL,
      `charcol1` varchar(128) DEFAULT NULL,
      `charcol2` varchar(128) DEFAULT NULL,
      `charcol3` varchar(128) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ENCRYPTION='Y';
    
    INSERT INTO `t1` VALUES (
    ...
    ,(1,2,'private','sensitive','data');

    As expected, that definition makes the dump less portable. The restore from the dump does not complete and throws an error (this is not remedied by using --force):

    On a slightly older 5.7 version:

    ...
    mysql> select version();
    +-----------+
    | version() |
    +-----------+
    | 5.7.8-rc  |
    +-----------+
    
    [root@centosmysql57 ~]# mysql mysqlslap < mysqlslap_t1_dump
    ERROR 1064 (42000) at line 25: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ENCRYPTION='Y'' at line 7

    On a different fork:

    ...
    MariaDB [(none)]> select version();
    +-----------------+
    | version()       |
    +-----------------+
    | 10.1.12-MariaDB |
    +-----------------+
    1 row in set (0.00 sec)
    
    [root@maria101 ~]# mysql mysqlslap < mysqlslap_t1_dump
    ERROR 1911 (HY000) at line 25: Unknown option 'ENCRYPTION'

    This doesn’t have anything to do with the encrypted state of the data in the table, just the table definition. I do like the encryption showing up in the table definition, for better visibility. Maybe the fix is to have mysqldump strip this when writing to the dump file; in the meantime it can be stripped by hand, as sketched below.
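
    A rough workaround sketch, to be tested before relying on it: strip the clause from the dump before piping it to a server that doesn't understand it.

    ...
    [root@localhost ~]# sed "s/ ENCRYPTION='Y'//g" mysqlslap_t1_dump | mysql mysqlslap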

    Categories: DBA Blogs
