
Feed aggregator

Where is my space on Linux filesystem?

Surachart Opun - Mon, 2014-09-22 05:06
Not often do I check my free space after making a filesystem on Linux. Today I made an Ext4 filesystem of around 460GB, but found only 437GB available. Another path should have been 50GB, but only 47GB was available.
Thank You @OracleAlchemist and @gokhanatil for good information about it.
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   47G   1% /u01
Reference - It specifies the percentage of the filesystem blocks reserved for the super-user. This avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. The default percentage is 5%.
After that I found out more. It looks like we can set it to zero, but we should not set it to zero for /, /var, /tmp, or any path with lots of file creates and deletes.
Reference on Red Hat:
If you set the reserved block count to zero, it won't affect
performance much except if you run for long periods of time (with lots
of file creates and deletes) while the filesystem is almost full
(i.e., say above 95%), at which point you'll be subject to
fragmentation problems.  Ext4's multi-block allocator is much more
fragmentation resistant, because it tries much harder to find
contiguous blocks, so even if you don't enable the other ext4
features, you'll see better results simply mounting an ext3 filesystem
using ext4 before the filesystem gets completely full.
If you are just using the filesystem for long-term archive, where
files aren't changing very often (i.e., a huge mp3 or video store), it
obviously won't matter.
- Ted
Example: changing the reserved-blocks percentage.
[root@mytest01 u01]# df -h /u01
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   47G   1% /u01
[root@mytest01 u01]# tune2fs -m 1 /dev/mapper/VolGroup0-U01LV
tune2fs 1.43-WIP (20-Jun-2013)
Setting reserved blocks percentage to 1% (131072 blocks)
[root@mytest01 u01]# df -h /u01
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   49G   1% /u01
[root@mytest01 u01]# tune2fs -m 5 /dev/mapper/VolGroup0-U01LV
tune2fs 1.43-WIP (20-Jun-2013)
Setting reserved blocks percentage to 5% (655360 blocks)
[root@mytest01 u01]# df -h /u01
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   47G   1% /u01
Finally, I knew the space was reserved for the super-user. I then checked the calculation in more detail.
[root@ottuatdb01 ~]# df -m /u01
Filesystem                  1M-blocks  Used Available Use% Mounted on
/dev/mapper/VolGroup0-U01LV     50269    52     47657   1% /u01
[root@ottuatdb01 ~]#  tune2fs -l /dev/mapper/VolGroup0-U01LV |egrep  'Block size|Reserved block count'
Reserved block count:     655360
Block size:               4096

Available = 47657MB
Used = 52M
Reserved Space = (655360 x 4096) / 1024 /1024 = 2560MB 
Total = 47657 + 2560 + 52 = 50269 
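This arithmetic can be checked in the shell; the values below are the ones reported above (655360 reserved blocks, 4096-byte blocks):

```shell
# Reserved space = reserved block count x block size
# (both values come from `tune2fs -l`, as shown above).
RESERVED_BLOCKS=655360
BLOCK_SIZE=4096                                   # bytes
RESERVED_MB=$(( RESERVED_BLOCKS * BLOCK_SIZE / 1024 / 1024 ))
echo "Reserved space: ${RESERVED_MB} MB"          # 2560 MB
echo "Total: $(( 47657 + RESERVED_MB + 52 )) MB"  # 50269 MB, matching df -m
```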

OK.. I felt good once that was clear to me. Still, on huge filesystems I believe reserving 5% of the blocks is too much. We can reduce it.
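To see why, a quick back-of-the-envelope comparison (the 460GB size is the one from this post; the percentages are tune2fs -m values):

```shell
# On a 460 GB filesystem, the default 5% reservation costs far more
# space than a 1% reservation would (sizes illustrative).
FS_GB=460
echo "5% reserved: $(( FS_GB * 5 / 100 )) GB"   # 23 GB
echo "1% reserved: $(( FS_GB * 1 / 100 )) GB"   # 4 GB
```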

Written By: Surachart Opun
Categories: DBA Blogs

Introduction to Oracle BI Cloud Service : Product Overview

Rittman Mead Consulting - Mon, 2014-09-22 05:02

Long-term readers of this blog will probably know that I’m enthusiastic about the possibilities around running OBIEE in the cloud, and over the past few weeks Rittman Mead have been participating in the beta program for release one of Oracle’s Business Intelligence Cloud Service (BICS). BICS went GA over the weekend and is now live on Oracle’s public cloud site, so all of this week we’ll be running a special five-part series on what BI Cloud Service is, how it works and how you go about building a simple application. I’m also presenting on BICS and our beta program experiences at Oracle OpenWorld this week (Oracle BI in the Cloud: Getting Started, Deployment Scenarios, and Best Practices [CON2659], Monday Sep 29 10:15 AM – 11:00 AM Moscone West 3014), so if you’re at the event and want to hear our thoughts, come along.

Over the next five days I’ll be covering the following topics, and I’ll update the list with hyperlinks once the articles are published:

So what is Oracle BI Cloud Service, and how does it relate to regular, on-premise OBIEE11g?

On the Oracle BI Cloud Service homepage, Oracle position the product as “Agile Business Intelligence in the Cloud for Everyone”, and there’s a couple of key points in this positioning that describe the product well.


The “agile” part is referring to the point that being cloud-based, there’s no on-premise infrastructure to stand-up, so you can get started a lot quicker than if you needed to procure servers, get the infrastructure installed, configure the software and get it accepted by the IT department. Agile also refers to the fact that you don’t need to purchase perpetual or one/two-year term licenses for the software, so you can use OBIEE for more tactical projects without having to worry about expensive long-term license deals. The final way that BICS is “agile” is in the simplified, user-focused tools that you use to build your cloud-based dashboards, with BICS adopting a more consumer-like user interface that in-theory should mean you don’t have to attend a course to use it.

BICS is built around standard OBIEE 11g, with an updated user interface that’ll roll-out across on-premise OBIEE in the next release and the standard Analysis Editor, Dashboard Editor and repository (RPD) under the covers. Your initial OBIEE homepage is a modified version of the standard OBIEE homepage that lists standard developer functions down the left-hand side as a series of menu items, and the BI Administration tool is replaced with an online, thin-client repository editor that provides a subset of the full BI Administration tool functionality.


Customers who license BICS in this initial release get two environments (or instances) to work with; a pre-prod or development environment to create their applications in initially, and a production environment into which they deploy each release of their work. BICS is also bundled with Oracle Database Schema Service, a single-schema Oracle Database service with an ApEx front-end into which you store the data that BICS reports on, and with ApEx and BICS itself having tools to upload data into it; this is, however, the only data source that BICS in version 1 supports, so any data that your cloud-based dashboards report on has to be loaded into Database Schema Service before you can use it, and you have to use Oracle’s provided tools to do this as regular ETL tools won’t connect. We’ll get onto the data provisioning process in the next article in this five-part series.

BICS dashboards and reports currently support a subset of what’s available in the on-premise version. The Analysis Editor (“Answers”) is the same as on-premise OBIEE with the catalog view on the left-hand side, tabs for Results and so on, and the same set of view types (and in fact a new one, for heat maps). There’s currently no access to Agents, Scorecards, BI Publisher or any other Presentation Services features that require a database back-end though, or any Essbase database in the background as you get with on-premise OBIEE.


What does become easier to deploy though is Oracle BI Mobile HD, as every BICS instance is, by definition, accessible over the internet. Last time I checked, the current version of BI Mobile HD on Apple’s App Store couldn’t yet connect, but I’m presuming an update will be out shortly to deal with BICS’s login process, which gets you to enter a BICS username and password along with an “identity domain” that specifies the particular company tenant ID that you use.


I’ll cover the thin-client data modeller later in this series in more detail, but at a high-level what this does is remove the need for you to download and install Oracle BI Administration to set up your BI Repository, something that would have been untenable for Oracle if they were serious about selling a cloud-based BI tool. The thin-client data modeller takes the most important (to casual users) features of BI Administration and makes them available in a browser-based environment, so that you can create simple repository models against a single data source and add features like dimension hierarchies, calculations, row-based and subject-area security using a point-and-click environment.


Features that are excluded in this initial release include the ability to define multiple logical table sources for a logical table, creating multiple business areas, creating calculations using physical (vs. logical) tables and so on, and there’s no way to upload on-premise RPDs to BICS, or download BICS ones to use on-premise, at this stage. What you do get with BICS is a new import and export format called a “BI Archive” which bundles up the RPD, the catalog and the security settings into a single archive file, and which you use to move applications between your two instances and to store backups of what you’ve created.

So what market is BICS aimed at in this initial release, and what can it be used for? I think it’s fair to say that in this initial release, it’s not a drop-in replacement for on-premise OBIEE 11g, with only a subset of the on-premise features initially supported and some fairly major limitations such as only being able to report against a single database source, no access to Agents, BI Publisher, Essbase and so on. But like the first iteration of the iPhone or any consumer version of a previously enterprise-only tool, it’s trying to do a few things well and aiming at a particular market – in this case, departmental users who want to stand up an OBIEE environment quickly, maybe only for a limited amount of time, and who are familiar with OBIEE and would like to carry on using it. In some ways its target market is those OBIEE customers who might otherwise have used QlikView, Tableau or one of the new SaaS BI services such as GoodData, who most probably have some data exports in the form of Excel spreadsheets or CSV documents, want to upload them to a BI service without getting all of IT involved and then share the results in the form of dashboards and reports with their team. Pricing-wise this appears to be who Oracle are aiming the service at (minimum 10 users, $3500/month including 50GB of database storage) and with the product being so close to standard OBIEE functionality in terms of how you use it, it’s most likely to appeal to customers who already use OBIEE 11g in their organisation.

That said, I can see partners and ISVs adopting BICS to deliver cloud-based SaaS BI applications to their customers, either as stand-alone analysis apps or as add-ons to other SaaS apps that need reporting functionality. Oracle BI Cloud Service is part of the wider Oracle Platform-as-a-Service (PaaS) that includes Java (WebLogic), Database, Documents, Compute and Storage, so I can see companies such as ourselves developing reporting applications for the likes of Salesforce, Oracle Sales Cloud and other SaaS apps and then selling them, hosting included, through Oracle’s cloud platform; I’ll cover our initial work in this area, developing a reporting application for data, later in this series.


Of course it’s been possible to deploy OBIEE in the cloud for a while now, with this presentation of mine from BIWA 2014 covering the main options; indeed, Rittman Mead host OBIEE instances for customers in Amazon AWS and do most of our development and training in the cloud, including our exclusive “ExtremeBI in the Cloud” agile BI service; but BICS has two major advantages for customers looking to cloud-deploy OBIEE:

  • It’s entirely thin-client, with no need for local installs of BI Administration and so forth. There’s also no need to get involved with Enterprise Manager Fusion Middleware Control for adding users to application roles, defining application role mappings and so on
  • You can license it monthly, including data storage. No other on-premise license option lets you do this, with the shortest term license being one year

such that we’ll be offering it as an alternative to AWS hosting for our ExtremeBI product, for customers who in-particular want the monthly license option.

So, an interesting start. As I said, I’ll be covering the detail of how BICS works over the next five days, starting with the data upload and provisioning process in tomorrow’s post – check back tomorrow for the next instalment.

Categories: BI & Warehousing

Extend a Linux partition on VMware

Surachart Opun - Mon, 2014-09-22 02:24
It was a quiet day. I was working as a System Administrator and installed Oracle Linux on a Virtual Machine guest. After installing the operating system, I wanted more disk space on the guest, so I extended the disk on the VM side. Then the question came back to me: what was I supposed to do on Linux next?
- Create a new disk (and Physical Volume) and then add it to the Volume Group.
My partitions:
[root@mytest01 ~]# fdisk -l /dev/sda
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       78326   628096000   8e  Linux LVM
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
I thought I should be able to extend (resize) /dev/sda2 instead. I searched the Internet and found some examples.
- Extend the Physical Volume (I chose this idea)
I started to do it. The idea: delete and recreate the partition, then run "pvresize".
[root@mytest01 ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       78326   628096000   8e  Linux LVM
Command (m for help): d
Partition number (1-4): 2
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
Partition number (1-4): 2
First cylinder (131-84852, default 131):
Using default value 131
Last cylinder, +cylinders or +size{K,M,G} (131-84852, default 84852):
Using default value 84852
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       84852   680524090   83  Linux
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): L
 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 5  Extended        42  SFS             86  NTFS volume set da  Non-FS data
 6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT
 f  W95 Ext'd (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       84852   680524090   8e  Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
-- I chose to "Reboot" :-) --
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
[root@mytest01 ~]# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
[root@mytest01 ~]#
[root@mytest01 ~]# reboot
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
[root@mytest01 ~]# pvresize  /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               649.00 GiB / not usable 1.31 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              166143
  Free PE               12800
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
Note: In this case I had two partitions (/dev/sda1 and /dev/sda2), so extending the existing partition was a good option. However, creating a new Physical Volume and adding it to the Volume Group might be safer.
Finally, I had VolGroup0 with its new size, so I extended the Logical Volume.
[root@mytest01 ~]# df -h /u02
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U02LV  460G   70M  437G   1% /u02
[root@mytest01 ~]# lvdisplay /dev/mapper/VolGroup0-U02LV
  --- Logical volume ---
  LV Path                /dev/VolGroup0/U02LV
  LV Name                U02LV
  VG Name                VolGroup0
  LV UUID                8Gdt6C-ZXQe-dPYi-21yj-Fs0i-6uvE-vzrCbc
  LV Write Access        read/write
  LV Creation host, time, 2014-09-21 16:43:50 -0400
  LV Status              available
  # open                 1
  LV Size                467.00 GiB
  Current LE             119551
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

[root@mytest01 ~]#
[root@mytest01 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               649.00 GiB
  PE Size               4.00 MiB
  Total PE              166143
  Alloc PE / Size       153343 / 599.00 GiB
  Free  PE / Size       12800 / 50.00 GiB
  VG UUID               thGxdJ-pCi2-18S0-mrZc-cCJM-2SH2-JRpfQ5
[root@mytest01 ~]#
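The vgdisplay numbers are self-consistent; a quick check with the values copied from the output above (4 MiB PE size, 12800 free extents):

```shell
# Each Physical Extent (PE) is 4 MiB, so the free space in GiB is
# free-PE x 4 / 1024 (values taken from the vgdisplay output above).
PE_SIZE_MIB=4
FREE_PE=12800
echo "Free space: $(( FREE_PE * PE_SIZE_MIB / 1024 )) GiB"   # 50 GiB
```

This 50 GiB of free space is exactly what the lvextend -L +50G below will consume.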
[root@mytest01 ~]# -- "e2fsck" should be used when shrinking a filesystem; it is not needed in this case, but it does no harm.
[root@mytest01 ~]# e2fsck -f  /dev/mapper/VolGroup0-U02LV 
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/VolGroup0-U02LV: 11/30605312 files (0.0% non-contiguous), 1971528/122420224 blocks
[root@mytest01 ~]#
[root@mytest01 ~]# pvscan
  PV /dev/sda2   VG VolGroup0   lvm2 [649.00 GiB / 50.00 GiB free]
  Total: 1 [649.00 GiB] / in use: 1 [649.00 GiB] / in no VG: 0 [0   ]
[root@mytest01 ~]#
[root@mytest01 ~]#
[root@mytest01 ~]# lvextend -L +50G /dev/mapper/VolGroup0-U02LV
  Extending logical volume U02LV to 517.00 GiB
  Logical volume U02LV successfully resized
[root@mytest01 ~]#
[root@mytest01 ~]#  resize2fs /dev/mapper/VolGroup0-U02LV
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/mapper/VolGroup0-U02LV to 135527424 (4k) blocks.
The filesystem on /dev/mapper/VolGroup0-U02LV is now 135527424 blocks long.
[root@mytest01 ~]#
[root@mytest01 ~]#
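The block count resize2fs reported lines up with the LV geometry. Assuming 4 KiB filesystem blocks and 4 MiB extents (both shown earlier in this post), each logical extent holds 1024 blocks:

```shell
# A 4 MiB extent holds 4 MiB / 4 KiB = 1024 filesystem blocks, so the
# resized LV's 132351 extents (Current LE in lvdisplay) give exactly
# the block count that resize2fs printed.
BLOCKS_PER_LE=$(( 4 * 1024 * 1024 / 4096 ))
CURRENT_LE=132351
echo "$(( CURRENT_LE * BLOCKS_PER_LE )) blocks"   # 135527424 blocks
```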
[root@mytest01 ~]# lvdisplay /dev/mapper/VolGroup0-U02LV
  --- Logical volume ---
  LV Path                /dev/VolGroup0/U02LV
  LV Name                U02LV
  VG Name                VolGroup0
  LV UUID                8Gdt6C-ZXQe-dPYi-21yj-Fs0i-6uvE-vzrCbc
  LV Write Access        read/write
  LV Creation host, time, 2014-09-21 16:43:50 -0400
  LV Status              available
  # open                 0
  LV Size                517.00 GiB
  Current LE             132351
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
[root@mytest01 ~]#

[root@mytest01 ~]# df -h /u02
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U02LV  509G   70M  483G   1% /u02
Note: resize2fs can be used online. If the filesystem is mounted, it can expand the size of the mounted filesystem, assuming the kernel supports on-line resizing (as of this writing, the Linux 2.6 kernel supports on-line resize for filesystems mounted using ext3 and ext4).
Looks like I learned a lot about Linux partitioning today.
Written By: Surachart Opun
Categories: DBA Blogs

The SQL Server DBA's essential toolkit list

Yann Neuhaus - Mon, 2014-09-22 02:01

This week, I attended SQLSaturday 2014 in Paris. During the pre-conference on Thursday, I followed Isabelle Van Campenhoudt's SQL Server Performance Audit session. The conference took the form of experience-sharing between attendees: together, we tried to list the most important software, tools, features and scripts that help a SQL Server DBA in his daily work. In this blog, I want to share our final list with you.


Windows Server Level: Hardware & Applications


CrystalDiskMark is a free disk benchmark software. It can be downloaded here.



SQLIO is another free disk benchmark software. It can be downloaded here.


Windows Performance Monitor (PerfMon)

PerfMon is a Windows native tool which collects log data in real time in order to examine how programs running on the computer affect the performance.

PerfMon provides a lot of counters which measure the system state or the activity.

You can learn more on TechNet.

You can find the most important counters for SQL Server here.


Performance Analysis of Logs (PAL)

PAL is an Open Source tool based on the top of PerfMon. It reads and analyses the main counters looking for known thresholds.

PAL generates an HTML report which alerts when thresholds are reached.

PAL tool can be downloaded on CodePlex.


Microsoft Assessment and Planning (MAP)

MAP is a Microsoft toolkit which provides hardware and software information and recommendations for deployment or migration process for several Microsoft technologies (such as SQL Server or Windows Server).

MAP toolkit can be downloaded on TechNet.


SQL Server Level: Configuration & Tuning


Dynamic Management Views and Functions (DMV)

DMVs are native views and functions of SQL Server that return server state information about a SQL Server instance.

You can learn more on TechNet.


sp_Blitz (from Brent Ozar)

It is a free script which checks SQL Server configuration and highlights common issues.

sp_Blitz can be found on Brent Ozar website.


Glenn Berry's SQL Server Performance

It provides scripts to diagnose your SQL Server, covering versions since SQL Server 2005.

These scripts can be downloaded here.


Enterprise Policy Management (EPM) Framework

EPM Framework is based on Policy-Based Management. It is a reporting solution which tracks SQL Server states which do not meet the specified requirements. It works on all instances of SQL Server since SQL Server 2000.

You can learn more on CodePlex.


SQL Server Level: Monitoring & Troubleshooting


SQL Profiler

SQL Profiler is a rich interface integrated in SQL Server which allows you to create and manage traces to monitor and troubleshoot a SQL Server instance.

You can learn more on TechNet.


Data Collector

Data Collector is a SQL Server feature introduced in SQL Server 2008 and available in all versions since.

It gathers performance information from multiple instances for performance monitoring and tuning.

You can learn more on TechNet.


Extended Events

Extended Events is a monitoring system integrated in SQL Server. It helps for troubleshooting or identifying a performance problem.

You can learn more on TechNet.


SQL Nexus

SQL Nexus is an Open Source tool that helps you identify the root cause of SQL Server performance issues.

It can be downloaded on CodePlex.


SQL Server Level: Maintenance


SQL Server Maintenance Solution

It is a set of scripts for running backups, integrity checks, and index and statistics maintenance on all editions of Microsoft SQL Server since SQL Server 2005.

This solution can be downloaded on Ola Hallengren's website.




This blog does not pretend to be a complete list of DBA needs, but it tries to cover the most important areas. You will notice that all of these tools are free and recognized by the DBA community as reliable and powerful.

I hope this will help you.

For information, you can learn how to use these tools in our SQL Server DBA Essentials workshop.

Data as an asset

DBMS2 - Sun, 2014-09-21 21:49

We all tend to assume that data is a great and glorious asset. How solid is this assumption?

  • Yes, data is one of the most proprietary assets an enterprise can have. Any of the Goldman Sachs big three* — people, capital, and reputation — are easier to lose or imitate than data.
  • In many cases, however, data’s value diminishes quickly.
  • Determining the value derived from owning, analyzing and using data is often tricky — but not always. Examples where data’s value is pretty clear start with:
    • Industries which long have had large data-gathering research budgets, in areas such as clinical trials or seismology.
    • Industries that can calculate the return on mass marketing programs, such as internet advertising or its snail-mail predecessors.

*”Our assets are our people, capital and reputation. If any of these is ever diminished, the last is the most difficult to restore.” I love that motto, even if Goldman Sachs itself eventually stopped living up to it. If nothing else, my own business depends primarily on my reputation and information.

This all raises the idea – if you think data is so valuable, maybe you should get more of it. Areas in which enterprises have made significant and/or successful investments in data acquisition include: 

  • Actual scientific, clinical, seismic, or engineering research.
  • Actual selling of (usually proprietary) data, with the straightforward economic proposition of “Get once, sell to multiple customers more cheaply than they could get it themselves.” Examples start:
    • This is the essence of the stock quote business. And Michael Bloomberg started building his vast fortune by adding additional data to what the then-incumbents could offer, for example by getting fixed-income prices from Cantor Fitzgerald.*
    • Multiple marketing-data businesses operate on this model.
    • Back when there was a small but healthy independent paper newsletter and directory business, its essence was data.
    • And now there are many online data selling efforts, in niches large and small.
  • Internet ad-targeting businesses. Making money from your great ad-targeting technology usually involves access to lots of user-impression and de-anonymization data as well.
  • Aggressive testing by internet businesses, of substantive offers and marketing-display choices alike. At the largest, such as eBay, you’ll rarely see a page that doesn’t have at least one experiment on it. Paper-based direct marketers take a similar approach. Call centers perhaps should follow suit more than they do.
  • Surveys, focus groups, etc. These are commonly expensive and unreliable (and the cheap internet ones commonly irritate people who do business with you). But sometimes they are, or seem to be, the only kind of information available.
  • Free-text data. On the whole I’ve been disappointed by the progress in text analytics. Still — and this overlaps with some previous points — there’s a lot of information in text or narrative form out there for the taking.
    • Internally you might have customer emails, call center notes, warranty reports and a lot more.
    • Externally there’s a lot of social media to mine.

*Sadly, Cantor Fitzgerald later became famous for being hit especially hard on 9/11/2001.

And then there’s my favorite example of all. Several decades ago, especially in the 1990s, supermarkets and mass merchants implemented point-of-sale (POS) systems to track every item sold, and then added loyalty cards through which they bribed their customers to associate their names with their purchases. Casinos followed suit. Airlines of course had loyalty/frequent-flyer programs too, which were heavily related to their marketing, although in that case I think loyalty/rewards were truly the core element, with targeted marketing just being an important secondary benefit. Overall, that’s an awesome example of aggressive data gathering. But here’s the thing, and it’s an example of why I’m confused about the value of data — I wouldn’t exactly say that grocers, mass merchants or airlines have been bastions of economic success. Good data will rarely save a bad business.

Related links

Categories: Other

Documentum upgrade project: D2-Client, facets and xPlore

Yann Neuhaus - Sun, 2014-09-21 19:57

To enhance the search capability we had to configure xPlore to use the new customer attributes as facets and configure D2 to use the default and new facets.

  Configuring xPlore to use facets with the customer attributes
  • Stop the Index Agent and Server
  • Update indexserverconfig.xml by adding the following line (e.g.):




  • Keep only the indexserverconfig.xml file in $DSEARCH_HOME/config
  • Remove $DSSEARCH_HOME/data/*
  • Start the Index Agent and Server
  • Start a full reindexing
  • Once all is indexed, set index to normal mode
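The sub-path line added to indexserverconfig.xml in the second step might look like the following. This is only a sketch based on the standard xPlore sub-path syntax — the attribute name dbi_events comes from the test command below, and the exact option values depend on your repository and xPlore version:

```xml
<sub-path leading-wildcard="false" type="string"
          enumerate-repeating-elements="false" compress="false"
          include-descendants="false" returning-contents="false"
          full-text-search="true" value-comparison="true"
          path="dmftmetadata//dbi_events"
          description="facet on the attribute dbi_events"/>
```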


Necessary tests

You should do two tests before configuring the D2-Client.


1. On the content server:


java -docbase_name test67 -user_name admin -password xxxx -full_text -facet_names dbi_events


2. On the xPlore server:

  • Check if the new lines have been validated by executing $DSEARCH_HOME/dsearch/xhive/admin/XHAdmin
  • Navigate to xhivedb/root-library/dsearch/data/default
  • Under the Indexes Tab, click the "Add Subpaths" button to open the "Add sub-paths to index" window where you can see in the Path column the added customer attributes


Configure the search in D2-Config
  • Launch D2-Config
  • Select Interface and then the Search sub-menu
  • Tick "Enable Facets" and enter a value for "Maximum of results by Facet"




Once this is done, you are able to use the facets with the D2-Client.

Improving your SharePoint performance using SQL Server settings (part 2)

Yann Neuhaus - Sun, 2014-09-21 17:36

Last week, I attended the SQLSaturday 2014 in Paris and participated in a session on SQL Server optimization for SharePoint by Serge Luca. This session listed the best practices and recommendations for database administrators in order to increase SharePoint performance. This blog post is based on that session and is meant as a sequel to my previous post on Improving your SharePoint performance using SQL Server settings (part 1).


SQL Server instance

It is highly recommended to use a dedicated SQL Server instance for a SharePoint farm and to set LATIN1_GENERAL_CI_AS_KS_WS as the instance collation.


Setup Account permissions

You should give the Setup Account the following permissions in your SQL Server instance:

  • securityadmin server role

  • dbcreator server role

  • db_owner on the databases used by the Setup Account


Alias DNS

It is recommended to use a DNS alias to connect from your SharePoint servers to the SQL Server instance. It simplifies maintenance and makes it easier to move SharePoint databases to another server.


Disk Priority

When you plan to allocate your SharePoint databases across different disks, you might wonder how to maximize the performance of your system.

This is a possible disk organization (from fastest to slowest):

  • Tempdb data and transaction log files

  • Content database transaction log files

  • Search database data files (except Admin database)

  • Content database data files


Datafiles policy

You should use several datafiles for Content and Search databases, as follows:

  • distribute equally-sized data files across separate disks

  • the number of data files should not exceed the number of processors

Multiple data files are not supported for other SharePoint databases.
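As a rough illustration of this policy, pre-sizing a content database with equally-sized files on separate disks could look like the following T-SQL. The database name, file paths and sizes here are invented for illustration, not taken from the session:

```sql
-- Hypothetical example: spread a content database across two disks
-- with two equally-sized, equally-growing data files.
ALTER DATABASE WSS_Content
    MODIFY FILE (NAME = WSS_Content, SIZE = 50GB, FILEGROWTH = 5GB);

ALTER DATABASE WSS_Content
    ADD FILE (
        NAME     = WSS_Content_2,
        FILENAME = 'E:\MSSQL\Data\WSS_Content_2.ndf',  -- second disk
        SIZE     = 50GB,
        FILEGROWTH = 5GB
    );
```

Keeping the files the same size matters because SQL Server's proportional fill algorithm only spreads writes evenly across files of equal free space.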


Content databases size

You should avoid databases bigger than 200 GB. Databases bigger than 4 TB are not supported by Microsoft.



SharePoint is quite opaque for SQL Server DBAs because it requires very specific configurations.

As a result, you cannot guess the answers: you have to learn about the subject.

★ Oracle to Unveil Database Cloud Service 2.0 at OpenWorld

Eddie Awad - Sun, 2014-09-21 17:22

Oracle Database Cloud Service

Michael Hickins:

At Oracle OpenWorld 2014, the company will roll out its new Database Cloud Service — a new multi-tenant database-as-a-service offering that will let customers migrate their existing apps and databases to the cloud “with the push of a button,” said Ellison. Data will be compressed ten to one and encrypted for secure and efficient transfer to the cloud, with no reprogramming. “Every single Oracle feature — even our latest high-speed in-memory processing — is included in the Oracle Cloud Database Service,” Ellison said. “Hundreds of thousands of customers and ISVs have been waiting for exactly this. Database is our largest software business and database will be our largest cloud service business.”

If you are attending Oracle OpenWorld this year and you are interested in the cloud (who isn’t nowadays?!) here are a few sessions focused on Database as a Service. I will be attending a few of them too.

© Eddie Awad's Blog, 2014. | Permalink | Add a comment | Topic: Oracle

3 film non-meme

Greg Pavlik - Sun, 2014-09-21 15:09
Riffing off a previous post - I was discussing with my wife last evening what we thought were the three best "recent" films we had seen. Here's my list:

1) Jia Zhangke's A Touch of Sin.

Reason: this is a powerful, powerful film that explores the effects of radical individualism, economic inequality, and the overturning of normal, local, rooted communities. Banned by the Chinese government, it is as much a critique of the values of neoliberalism globally as it is of the current Chinese economic experiment.

2) Alejandro González Iñárritu's Biutiful.

Reason: a moving exploration of responsibility and ethics in the face of poverty, hopelessness and impending death. What do we make of the human spirit and our obligations to each other - and our obligations in the face of The Other?  Javier Bardem was birthed for this role - fantastic acting.

3) Pavel Lungin's The Island.

Reason: who is guilty before whom and for what? Take a director of Jewish background, give him a story that is loosely inspired by a hagiography of the fool-for-Christ Feofil of the Kievan Caves, and cast a retired-rock-star-current-recluse (Pyotr Mamonov) as an Orthodox monastic in the far north of Russia, and I would have quite low expectations for the outcome. What Lungin produced is instead not only his best film but I think one of the best films of the last 20 years.

This is not my kingdom

FeuerThoughts - Sun, 2014-09-21 14:27
I don't know most people in Chicago on an individual basis, but of all the people I don't know, my favorite Chicagoans are scavengers. They roam the alleys in beat up pickup trucks, with various kinds of makeshift walls extended above the bed.
They grab anything made of metal and anything with the possibility of value. They reduce the amount of garbage going to landfills and I thank them very much for doing this.
Driving the other day, I passed one such truck with a hand-lettered sign nailed to the wooden side wall. It said:
This is not my kingdom.
Just passing through.

Categories: Development

Partitioned Clusters

Jonathan Lewis - Sun, 2014-09-21 12:28

In case you hadn’t noticed it, partitioning has finally reached clusters in 12c – specifically, it’s limited to hash clusters with range partitioning, but it may be enough to encourage more people to use the technology. Here’s a simple example of the syntax:

create cluster pt_hash_cluster (
        id              number(8,0),
        d_date          date,
        small_vc        varchar2(8),
        padding         varchar2(100)
)
-- single table
hashkeys 10000
hash is id
size 700
partition by range (d_date) (
        partition p2011Jan values less than (to_date('01-Feb-2011','dd-mon-yyyy')),
        partition p2011Feb values less than (to_date('01-Mar-2011','dd-mon-yyyy')),
        partition p2011Mar values less than (to_date('01-Apr-2011','dd-mon-yyyy'))
);

I’ve been waiting for them to appear ever since the TPC-C benchmark that Oracle did with them – they’ve been a long time coming (check the partition dates – that gives you some idea of when I wrote this example).

Just to add choice (a.k.a. confusion), Oracle has also introduced attribute clustering, so you can cluster data in single tables without creating clusters – but only while doing direct path loads or table moves. The performance intent is similar, though the technology and circumstances of use are different.
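For comparison, here is a minimal sketch of the 12c attribute clustering syntax. The table and column names are mine, not from the example above:

```sql
-- Declare a clustering intent on the table itself (no cluster object).
create table sales_history (
        id              number(8,0),
        d_date          date,
        padding         varchar2(100)
)
clustering by linear order (d_date);

-- The physical re-ordering only happens on direct path operations,
-- e.g. a direct path load, or a table move:
alter table sales_history move;
```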

On False Binaries, Walled Gardens, and Moneyball

Michael Feldstein - Sat, 2014-09-20 10:08

D’Arcy Norman started a lively inter-blog conversation like we haven’t seen in the edublogosphere in quite a while with his post on the false binary between LMS and open. His main point is that, even if you think that the open web provides a better learning environment, an LMS provides a better-than-nothing learning environment for faculty who can’t or won’t go through the work of using open web tools, and in some cases may be perfectly adequate for the educational need at hand. The institution has an obligation to provide the least-common-denominator tool set in order to help raise the baseline, and the LMS is it. This provoked a number of responses, but I want to focus on Phil’s two responses, which talk at a conceptual level about building a bridge between the “walled garden” of the LMS and the open web (or, to draw on his analogy, keeping the garden but removing the walls that demarcate its border). There are some interesting implications from this line of reasoning that could be explored. What would be the most likely path for this interoperability to develop? What role would the LMS play when the change is complete? For that matter, what would the whole ecosystem look like?

Seemingly separately from this discussion, we have the new Unizin coalition. Every time that Phil or I write a post on the topic, the most common response we get is, “Uh…yeah, I still don’t get it. Tell me again what the point of Unizin is, please?” The truth is that the Unizin coalition is still holding its cards close to its vest. I suspect there are details of the deals being discussed in back rooms that are crucial to understanding why universities are potentially interested. That said, we do know a couple of broad, high-level ambitions that the Unizin leadership has discussed publicly. One of those is to advance the state of learning analytics. Colorado State University’s VP of Information Technology Pat Burns has frequently talked about “educational Moneyball” in the context of Unizin’s value proposition. And having spoken with a number of stakeholders at Unizin-curious schools, it is fair to say that there is a high level of frustration with the current state of play in commercial learning analytics offerings that is driving some of the interest. But the dots have not been connected for us. What is the most feasible path for advancing the state of learning analytics? And how could Unizin help in this regard?

It turns out that the walled garden questions and the learning analytics questions are related.

The Current State of Interoperability

Right now, our LMS gardens still have walls and very few doors, but they do have windows, thanks to the IMS LTI standard. You can do a few things with LTI, including the following:

  • Send a student from the LMS to someplace elsewhere on the web with single sign-on
  • Bring that “elsewhere” place inside the LMS experience by putting it in an iframe (again, with single sign-on)
  • Send assessment results (if there are any) back from that “elsewhere” to the LMS gradebook.
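As a concrete illustration of those three capabilities, an LTI 1.1 launch is an OAuth-signed form POST from the LMS to the tool. A sketch of the key parameters follows — the values are invented for illustration, but the parameter names are the standard LTI 1.1 ones:

```
lti_message_type=basic-lti-launch-request
lti_version=LTI-1p0
resource_link_id=course42-blog-activity      # which placement in which course
user_id=4f1a9b...                            # opaque LMS user identifier (single sign-on)
roles=Learner                                # or Instructor, etc.
lis_outcome_service_url=https://lms.example.edu/grade-passback
lis_result_sourcedid=opaque-grade-token      # lets the tool send a score back
oauth_consumer_key=...                       # plus an OAuth 1.0a signature over the POST
```

The last two `lis_*` parameters are the grade-return channel mentioned in the third bullet; everything else simply identifies the user and placement.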

The first use case for LTI was to bring a third-party tool (like a web conferencing app or a subject-specific test engine) into the LMS, making it feel like a native tool. The second use case was to send students out to a tool that needed full control of the screen real estate (like an eBook reader or an immersive learning environment) but to make that process easier for students (through single sign-on) and teachers (through grade return). This is nice, as far as it goes, but it has some significant limitations. From a user experience perspective, it still privileges the LMS as “home base.” As D’Arcy points out, that’s fine for some uses and less fine for others. Further, when you go from the LMS to an LTI tool and back, there’s very little information shared between the two. For example, you can use LTI to send a student from the LMS to a WordPress multiuser installation, have WordPress register that student and sign that student in, and even provision a new WordPress site for that student. But you can’t have it feed back information on all the student’s posts and comments into a dashboard that combines it with the student’s activity in the LMS and in other LTI tools. Nor can you use LTI to aggregate student posts from their respective WordPress blogs that are related to a specific topic. All of that would have to be coded separately (or, more likely, not done at all). This is less than ideal from both user experience and analytics perspectives.

Enter Uniz…Er…Caliper

There is an IMS standard in development called Caliper that is intended to address this problem (among many others). I have described some of the details of it elsewhere, but for our current purposes the main thing you need to know is that it is based on the same concepts (although not the same technical standards) as the semantic web. What is that? Here’s a high-level explanation from the Man Himself, Mr. Tim Berners-Lee:

Click here to view the embedded video.

The basic idea is that web sites “understand” each other. The LMS would “understand” that a blog provides posts and comments, both of which have authors and tags and categories, and some of which have parent/child relationships with others. Imagine if, during the LTI initial connection, the blog told the LMS about what it is and what it can provide. The LMS could then reply, “Great! I will send you some people who can be ‘authors’, and I will send you some assignments that can be ‘tags.’ Tell me about everything that goes on with my authors and tags.” This would allow instructors to combine blog data with LMS data in their LMS dashboard, start LMS discussion threads off of blog posts, and probably a bunch of other nifty things I haven’t thought of.

But that’s not the only way you could use Caliper. The thing about the semantic web is that it is not hub-and-spoke in design and does not have to have a “center.” It is truly federated. Perhaps the best analogy is to think of your mobile phone. Imagine if students had their own private learning data wallets, the same way that your phone has your contact information, location, and so on. Whenever a learning application—an LMS, a blog, a homework product, whatever—wanted to know something about you, you would get a warning telling you which information the app was asking to access and asking you to approve that access. (Goodbye, FERPA freakouts.) You could then work in those individual apps. You could authorize apps to share information with each other. And you would have your own personal notification center that would aggregate activity alerts from those apps. That notification center could become the primary interface for your learning activities across all the many apps you use. The PLE prototypes that I have seen basically tried to do a basic subset of this capability set using mostly RSS and a lot of duct tape. Caliper would enable a richer, more flexible version of this with a lot less point-to-point hand coding required. You could, for example, use any Caliper-enabled eBook reader that you choose on any device that you choose to do your course-related reading. You could choose to share your annotations with other people in the class and have their annotations appear in your reader. You could share information about what you’ve read and when you’ve read it (or not) with the instructor or with a FitBit-style analytics system that helps recommend better study habits. The LMS could remain primary, fade into the background, or go away entirely, based on the individual needs of the class and the students.

Caliper is being marketed as a learning analytics standard, but because it is based on the concepts underlying the semantic web, it is much more than that.

Can Unizin Help?

One of the claims that Unizin stakeholders make is that the coalition can accelerate the arrival of useful learning analytics. We have very few specifics to back up this claim so far, but there are occasionally revealing tidbits. For example, University of Wisconsin CIO Bruce Mass wrote, “…IMS Global is already working with some Unizin institutions on new standards.” I assume he is primarily referring to Caliper, since it is the only new learning analytics standard that I know of at the IMS. His characterization is misleading, since it suggests a peer-to-peer relationship between the Unizin institutions and IMS. That is not what is happening. Some Unizin institutions are working in IMS on Caliper, by which I mean that they are participating in the working group. I do not mean to slight or denigrate their contributions. I know some of these folks. They are good smart people, and I have no doubt that they are good contributors. But the IMS is leading the standards development process, and the Unizin institutions are participating side-by-side with other institutions and with vendors in that process.

Can Unizin help accelerate the process? Yes they can, in the same ways that other participants in the working group can. They can contribute representatives to the working groups, and those representatives can suggest use cases. They can review documents. They can write documents. They can implement working prototypes or push their vendors to do so. The latter is probably the biggest thing that anyone can do to move a standard forward. Sitting around a table and thinking about the standard is good and useful, but it’s not a real standard until multiple parties implement it. It’s pretty common for vendors to tell their customers, “Oh yes, of course we will implement Caliper, just as soon as the specification is finalized,” while failing to mention that the specification cannot be finalized until there are implementers. What you end up with is a bunch of kids standing around the pool, each waiting for somebody else to jump in first. In other words, what you end up with is paralysis. If Unizin can accelerate the rate of implementation and testing of the proposed specification by either implementing themselves or pushing their vendor(s) to implement, then they can accelerate the development of real market solutions for learning analytics. And once those solutions exist, then Unizin institutions (along with everyone else) can use them and try to discover how to use all that data to actually improve learning. These are not unique and earth-shaking contributions that only Unizin could make, but they are real and important ones. I hope that they make them.

The post On False Binaries, Walled Gardens, and Moneyball appeared first on e-Literate.

JDeveloper 12c ADF View Token Performance Improvement

Andrejus Baranovski - Sat, 2014-09-20 05:37
There is a known limitation in ADF 11g related to accessing an application in the same session from multiple browser tabs. While working with multiple browser tabs, the user eventually consumes all view tokens and gets a timeout error on returning to a previous browser tab. The unused browser tab times out because ADF 11g shares the same cache of view tokens across all browser tabs in the same session. This means the most recently used browser tab consumes all the view tokens, the other browser tab loses its last token, and its screen state is reset. This behaviour is greatly improved in ADF 12c, with a separate view token cache maintained per browser tab. If your application is designed to allow user access through multiple browser tabs in the same session, you should upgrade to ADF 12c for better performance.

I'm going to post results of a test with 11g and 12c. Firstly I'm going to present ADF 11g case and then ADF 12c.

ADF 11g view token usage:

The sample application contains one regular button, with PartialSubmit=false, to generate a new view token on every submit:

Max Tokens parameter in web.xml is set to 2, to simulate token usage:

To see debug output for view token usage at runtime, you should set a special parameter - org.apache.myfaces.trinidadinternal.DEBUG_TOKEN_CACHE=true
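Assuming the "Max Tokens" setting refers to Trinidad's standard CLIENT_STATE_MAX_TOKENS context parameter (an assumption on my part — the post does not name it), the two web.xml entries might look like this:

```xml
<!-- Assumed settings: CLIENT_STATE_MAX_TOKENS limits the view token
     cache (2 here only to make the behaviour easy to reproduce);
     DEBUG_TOKEN_CACHE prints token add/remove activity to the log. -->
<context-param>
  <param-name>org.apache.myfaces.trinidad.CLIENT_STATE_MAX_TOKENS</param-name>
  <param-value>2</param-value>
</context-param>
<context-param>
  <param-name>org.apache.myfaces.trinidadinternal.DEBUG_TOKEN_CACHE</param-name>
  <param-value>true</param-value>
</context-param>
```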

At runtime, open two browser tabs. You are going to see two view tokens consumed and reported in the log:

Press the Test View Token button in the second tab; this consumes another view token. Remember, we have set a maximum of two tokens, and now the log says - removing/adding token. This means we have consumed both available tokens (one for each tab) and the token from the first tab is replaced:

Go back to the first browser tab and press the same Test View Token button - you are going to get a timeout error, as the view token is lost and you need to reload the tab:

ADF 12c view token usage:

The sample application, in the same way as in 11g, implements a simple button with PartialSubmit=false. This forces a new view token to be used on each submit:

Max Tokens parameter in web.xml is again set to 2:

Two browser tabs are opened and two view tokens are consumed:

Press Test View Token in the second browser tab; this time you will not see removing/adding token information in the log (unlike in 11g). This means the view token from the first browser tab remains in the cache - each browser tab maintains its own view token cache:

Go back to the first browser tab and press the Test View Token button - the application works fine, with no timeout error as there was in 11g:

Download sample application ADF 11g and 12c examples -

Focus on Oracle Social Network at OpenWorld

David Haimes - Fri, 2014-09-19 22:49

This is the first of a series of posts I am planning leading up to Oracle OpenWorld which starts in less than a week.  I have a few different focus areas this year, so I’ll write a little about each of them.

I’ve been talking about collaboration in ERP for quite some time and was also very flattered to have TheAppsLab and Ulan from the UX team cover what we have done in their blogs too. I call it Socializing the Finance Department; it isn’t about more pot luck lunches and after-work drinks, it is about using social tools in a secure and efficient manner, embedded in your ERP system, tied to your transactions and business flows, to make you more productive.

The Oracle Social Network (OSN) is part of the infrastructure we build our cloud applications on, so it is pervasive in our cloud apps.  There are a lot of good sessions, see here for the complete OSN list.  I will be on a panel discussing the best use cases for social in enterprise applications, Tuesday, September 30th, 5pm – 5:45pm, Moscone West – 3022, full details here.

We won’t be doing a demo, but here is one video to give you a taste of what we will discuss, or check out my post Can chatting make us more productive? for another video.  To be honest, if you catch me during the #oow week, I’m usually happy to show this off, so feel free to ask me.

Categories: APPS Blogs

Oracle OpenWorld 2014 – Bloggers Meetup

Pythian Group - Fri, 2014-09-19 15:35

Guess what? You all know that it’s coming, when it’s coming and where… That’s right! The Annual Oracle Bloggers Meetup, one of your top favourite events of OpenWorld, is happening at the usual place and time.

What: Oracle Bloggers Meetup 2014

When: Wed, 1-Oct-2014, 5:30pm

Where: Main Dining Room, Jillian’s Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). Please comment with “COUNT ME IN” if coming — we need to know the attendance numbers.

Traditionally, Oracle Technology Network and Pythian sponsor the venue and drinks. We will also have some cool things happening and a few prizes.

In the age of Big Data and Internet of Things, our mingling activity this year will be virtual — using an app we wrote specifically for this event, so bring your iStuff and Androids to participate and win. Hope this will work! :)

As usual, vintage t-shirts, ties, or bandanas from previous meetups will make you look cool — feel free to wear them.

For those of you who don’t know the history: The Bloggers Meetup during Oracle OpenWorld was started by Mark Rittman and continued by Eddie Awad, and then I picked up the flag in 2009 (gosh…  6 years already?) The meetups have been a great success for making new friends and catching up with old, so let’s keep them this way! To give you an idea, here are the photos from the OOW08 Bloggers Meetup (courtesy of Eddie Awad) and OOW09 meetup blog post update from myself, and a super cool video by a good blogging friend, Bjorn Roest from OOW13.

While the initial meetings were mostly targeted to Oracle database folks, guys and gals from many Oracle technologies — Oracle database, MySQL, Apps, Sun technologies, Java and more join in the fun. All bloggers are welcome. We estimate to gather around 150 bloggers.

If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL with your comment — it’s a Bloggers Meetup after all! Make sure you comment here if you are attending so that we have enough room, food, and (most importantly) drinks.

Of course, do not forget to blog and tweet about this year’s bloggers meetup. See you there!

Categories: DBA Blogs

Best of OTN - Week of September 14th

OTN TechBlog - Fri, 2014-09-19 11:51
Architect Community

The Top 3 most popular videos on the OTN ArchBeat YouTube Channel for the last seven days:

  1. 2 Minute Tech Tip: Vagrant, Puppet, Docker, and Packer
    by Oracle ACE Director Lucas Jellema
  2. 2 Minute Tech Tip: Middleware as a Service
    by Kelly Goestch, Product Director, Oracle Cloud Application Foundation
  3. 2 Minute Tech Tip: Understanding IP Ports
    by Oracle ACE Director Simon Haslam

Click here for the entire 2 Minute Tech Tip series.

Friday Fun from OTN Architect Community Director Bob Rhubart:
If the original version of Meghan Trainor's All About That Bass doesn't quite mesh with your musical tastes, mix yourself a nice martini (up, with a lemon twist, please) and cool out with this brilliant, jazzy cover by the marvelous Kate Davis.

Database Community

w00t! OOW 2014 RAC ATTACK Set Up Instructions- Going to the RAC ATTACK at OOW 2014? We're meeting on Sunday 9/28 starting at 9:00 am at the Oracle Technology Lounge in the Moscone South Lobby. Here's the FIRST, NEXT and FINALLY information you need to be ready to learn more about Oracle 12c and Real Application Clusters.

Java Community

Java's New Console Management Tool -  Great for sysadmins - Learn More!

Tech Article: Why another MVC? - The recent filing of a new JSR for an MVC 1.0 framework in Java EE 8 [1] calls for some clarification on how that JSR relates to JSF.

Friday Funny from OTN Java Community Manager, Tori Wieldt - What Programmers Say vs. What They Mean

Strategy, Technical and Community Keynotes - Start JavaOne with the Strategy and Technical keynotes to learn about the strategy and roadmaps as well as technical insights. The keynotes will be Sunday, September 28 from 12:45 p.m. to 3:00 p.m. at Moscone North, Hall D.

Systems Community

Oracle ZS3 Series has the capability to boot 16,000 VMs directly out of memory in under seven minutes. Read more in Steve Zivanic’s interview for the CUBE at VMworld.

Latest Lab by Orgad: How to Set Up a Hadoop 2 Cluster with Oracle Solaris - How to set up an Apache Hadoop 2 (YARN) cluster using Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive. Presented at Oracle OpenWorld, but I'll try to convince him to publish it on OTN at a later date.

Making Sure Your Exadata Is Running Right -  Rick uses another motorcycle analogy and connects it to Oracle. 

OOW - Focus On Support and Services for Server, Storage and Solaris

Chris Warticki - Fri, 2014-09-19 08:00
Wednesday, Oct 01, 2014

Conference Sessions

Oracle Solaris: Best Practices for Maintenance and Upgrades (CON8312)
Walter Fisch, Director, Solaris & Network Domain, Oracle
Alfred Mayerhofer, Sr Principal Technical Support Engineer, Oracle
10:15 AM - 11:00 AM, Intercontinental - Grand Ballroom A

Sys Admin Best Practices: Maintaining Oracle Server and Storage Systems (CON8313)
Daniel Green, Sr Technical Support Engineer, Oracle
Jeff Nieusma, Senior Principal Engineer, Oracle
3:30 PM - 4:15 PM, Intercontinental - Intercontinental C

Thursday, Oct 02, 2014

Conference Sessions

System Support: Learn How to Automate Service Requests and Improve Resolution Time (CON9132)
Remco Lengers, Sr. Manager Proactive Support, Oracle
Michael Mcdonnell, X86 & ES Automation Lead, Oracle
10:45 AM - 11:30 AM, Marriott Marquis - Salon 1/2/3*

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details.
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Tuesday, Sep 30

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Wednesday, Oct 01

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

Log Buffer #389, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-19 07:23

As Oracle OpenWorld draws near, the MySQL and Oracle bloggers are getting more excited and productive, and the SQL Server bloggers are not far behind. This Log Buffer edition covers it all.


Oracle:

What’s New With Fast Data at Oracle Open World 2014?

JASPIC improvements in WebLogic 12.1.3, by Arjan Tijms.

Larry Ellison Stepping Down as Chief of Oracle.

Mobilizing E-Business Suite with Oracle MAF and FMW at OOW 14.

Oracle ISV Engineering @ Oracle OpenWorld 2014.

SQL Server:

How to create Data Mining Reports using Reporting Services.

Azure Virtual Machines Part 0: A VM Primer.

Stairway to PowerPivot and DAX – Level 7: Function / Iterator Function Pairs: The DAX AVERAGE() and AVERAGEX() Functions.

Free eBook: SQL Server Transaction Log Management.

The Mindset of the Enterprise DBA: Harnessing the Power of Automation.


MySQL:

MySQL 5.6.20 on POWER.

Announcing TokuDB v7.5: Read Free Replication.

Global Transaction ID (GTID) is one of the most compelling new features of MySQL 5.6.

Managing big data? Say ‘hello’ to HP Vertica.

Tweaking MySQL Galera Cluster to handle large databases – open_files_limit.

Categories: DBA Blogs

Shrink Tablespace

Jonathan Lewis - Fri, 2014-09-19 05:10

A recent question on the OTN database forum raised the topic of returning free space in a tablespace to the operating system by rebuilding objects to fill the gaps near the start of files and leave the empty space at the ends of files so that the files could be resized downwards.

This isn’t a process that you’re likely to need frequently, but I have written a couple of notes about it, including a sample query to produce a map of the free and used space in a tablespace. While reading the thread, though, it crossed my mind that recent versions of Oracle introduced a feature that can reduce the amount of work needed to get the job done, so I thought I’d demonstrate the point here.

When you move a table its indexes become unusable and have to be rebuilt; but when an index becomes unusable, the more recent versions of Oracle will drop the segment. Here’s a key point: if the index becomes unusable because the table has been moved, the segment is dropped only AFTER the move has completed. Pause a minute for thought and you realise that the smart thing to do before you move a table is to make its indexes unusable so that they release their space BEFORE you move the table. (This strategy is only relevant if you’ve mixed tables and indexes in the same tablespace and if you’re planning to do all your rebuilds into the same tablespace rather than moving everything into a new tablespace.)

Here are some outputs demonstrating the effect in a database. I have created (and loaded) two tables in a tablespace of 1MB uniform extents, 8KB block size; then I’ve created indexes on the two tables. Running my ts_hwm.sql script I get the following results for that tablespace:

------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 TEST_USER  T1              TABLE
                256         383 TEST_USER  T2              TABLE
                384         511 TEST_USER  T1_I1           INDEX
                512         639 TEST_USER  T2_I1           INDEX
                640      65,535 free       free

Notice that it was a nice new tablespace, so I can see the two tables followed by the two indexes at the start of the tablespace. If I now move table t1 and re-run the script this is what happens:

alter table t1 move;

------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 free       free
                256         383 TEST_USER  T2              TABLE
                384         511 free       free
                512         639 TEST_USER  T2_I1           INDEX
                640         767 TEST_USER  T1              TABLE
                768      65,535 free       free

Table t1 is now situated past the previous tablespace highwater mark and I have two gaps in the tablespace where t1 and the index t1_i1 used to be.

Repeat the experiment from scratch (drop the tables, purge, etc. to empty the tablespace) but this time mark the index unusable before moving the table and this is what happens:

------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 free       free
                256         383 TEST_USER  T2              TABLE
                384         511 TEST_USER  T1              TABLE
                512         639 TEST_USER  T2_I1           INDEX
                640      65,535 free       free

Table t1 has moved into the space vacated by index t1_i1, so the tablespace highwater mark has not moved up.

If you do feel the need to reclaim space from a tablespace by rebuilding objects, you can find it quite hard to decide the order in which the objects should be moved or rebuilt to minimise the work you (or rather, Oracle) have to do. If you remember that any table you move will release its index space anyway, and insert a step to mark those indexes unusable before you move the table, you may find it much easier to work out a good order for moving the tables.
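Using the t1 / t1_i1 objects from this example, the ordering described above is a sketch along these lines (the rebuild at the end is the step you would have had to do anyway once the table moved):

```sql
-- Sketch of the ordering described above, using t1 and t1_i1 from this example.
alter index t1_i1 unusable;   -- recent versions drop the unusable segment, freeing its space now
alter table t1 move;          -- the move can reuse the space the index just released
alter index t1_i1 rebuild;    -- recreate the index segment after the move completes
```

The point of the ordering is that making the index unusable first releases its extents before the move needs space, instead of only after the move has finished.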

Footnote: I appreciate that some readers may already take advantage of the necessity of rebuilding indexes by dropping indexes before moving tables – but I think it’s a nice feature that we can now make them unusable and get the same benefit without introducing a risk of error when using a script to recreate an index we’ve dropped.


Switch CentOS to Oracle Linux -

Surachart Opun - Fri, 2014-09-19 04:15
I spend a lot of my time with Linux, and people sometimes ask about moving from CentOS to Oracle Linux. I used to assume that would be easy to do, but it's better to test. I focused on 2 links.

Oracle provides a script that can convert CentOS 5 and 6 systems to Oracle Linux. After it completes, run "yum upgrade" to synchronize the installed packages.
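The transcript below boils down to three steps. Here is a dry-run sketch of them; it only echoes each command, since the real run must be done as root on the CentOS box itself, and the script name/URL (centos2ol.sh) is an assumption based on Oracle's published conversion script — check Oracle's site before running:

```shell
# Dry-run sketch: echo each conversion step instead of executing it.
# The centos2ol.sh name/URL is an assumption; verify against Oracle's site.
run() { echo "+ $*"; }    # replace 'echo "+ $*"' with "$@" to really execute

run curl -O https://linux.oracle.com/switch/centos2ol.sh
run sh centos2ol.sh       # backs up CentOS repo files, swaps in the Oracle Linux release packages
run yum upgrade           # afterwards, synchronize installed packages with the OL repositories
```

Swapping the echo for real execution turns this into the same sequence the transcript shows.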
[root@test-centos ~]# uname -r
[root@test-centos ~]# cat /etc/centos-release
CentOS release 6.5 (Final)
[root@test-centos ~]# curl -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6523  100  6523    0     0   3453      0  0:00:01  0:00:01 --:--:-- 17534
[root@test-centos ~]# sh
Checking for required packages...
Checking your distribution...
Looking for yumdownloader...
Finding your repository directory...
Downloading Oracle Linux yum repository file...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4233  100  4233    0     0   3507      0  0:00:01  0:00:01 --:--:--  4724
Removing unsupported packages...
Loaded plugins: fastestmirror, security
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package libreport-plugin-rhtsupport.x86_64 0:2.0.9-19.el6.centos will be erased
--> Processing Dependency: libreport-plugin-rhtsupport = 2.0.9-19.el6.centos for package: libreport-compat-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: libreport-plugin-rhtsupport for package: abrt-cli-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: libreport-plugin-rhtsupport = 2.0.9-19.el6.centos for package: libreport-python-2.0.9-19.el6.centos.x86_64
--> Running transaction check
---> Package abrt-cli.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package libreport-compat.x86_64 0:2.0.9-19.el6.centos will be erased
--> Processing Dependency: libreport-compat = 2.0.9-19.el6.centos for package: libreport-2.0.9-19.el6.centos.x86_64
---> Package libreport-python.x86_64 0:2.0.9-19.el6.centos will be erased
--> Running transaction check
---> Package libreport.x86_64 0:2.0.9-19.el6.centos will be erased
--> Processing Dependency: for package: abrt-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: for package: libreport-plugin-kerneloops-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: libreport-plugin-reportuploader-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: libreport-plugin-logger-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: libreport-plugin-kerneloops-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: abrt-libs-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: for package: abrt-addon-python-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: for package: libreport-cli-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: abrt-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: for package: abrt-tui-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: for package: abrt-addon-ccpp-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: for package: libreport-plugin-mailx-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: libreport-plugin-reportuploader-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: for package: abrt-addon-kerneloops-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: libreport = 2.0.9-19.el6.centos for package: libreport-plugin-logger-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: libreport = 2.0.9-19.el6.centos for package: libreport-plugin-kerneloops-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: libreport = 2.0.9-19.el6.centos for package: libreport-cli-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: libreport >= 2.0.9-16 for package: abrt-2.0.8-21.el6.centos.x86_64
--> Processing Dependency: libreport = 2.0.9-19.el6.centos for package: libreport-plugin-mailx-2.0.9-19.el6.centos.x86_64
--> Processing Dependency: libreport = 2.0.9-19.el6.centos for package: libreport-plugin-reportuploader-2.0.9-19.el6.centos.x86_64
--> Running transaction check
---> Package abrt.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package abrt-addon-ccpp.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package abrt-addon-kerneloops.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package abrt-addon-python.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package abrt-libs.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package abrt-tui.x86_64 0:2.0.8-21.el6.centos will be erased
---> Package libreport-cli.x86_64 0:2.0.9-19.el6.centos will be erased
---> Package libreport-plugin-kerneloops.x86_64 0:2.0.9-19.el6.centos will be erased
---> Package libreport-plugin-logger.x86_64 0:2.0.9-19.el6.centos will be erased
---> Package libreport-plugin-mailx.x86_64 0:2.0.9-19.el6.centos will be erased
---> Package libreport-plugin-reportuploader.x86_64 0:2.0.9-19.el6.centos will be erased
--> Finished Dependency Resolution
ol6_UEK_latest                                                                                                                                   | 1.2 kB     00:00
ol6_UEK_latest/primary                                                                                                                           |  16 MB     00:08
ol6_latest                                                                                                                                       | 1.4 kB     00:00
ol6_latest/primary                                                                                                                               |  41 MB     00:21
Dependencies Resolved
 Package                                        Arch                  Version                             Repository                                               Size
 libreport-plugin-rhtsupport                    x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 74 k
Removing for dependencies:
 abrt                                           x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                706 k
 abrt-addon-ccpp                                x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                189 k
 abrt-addon-kerneloops                          x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 25 k
 abrt-addon-python                              x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 20 k
 abrt-cli                                       x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                0.0
 abrt-libs                                      x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 24 k
 abrt-tui                                       x86_64                2.0.8-21.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 15 k
 libreport                                      x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                1.2 M
 libreport-cli                                  x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 26 k
 libreport-compat                               x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                7.4 k
 libreport-plugin-kerneloops                    x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 18 k
 libreport-plugin-logger                        x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 23 k
 libreport-plugin-mailx                         x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 32 k
 libreport-plugin-reportuploader                x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 32 k
 libreport-python                               x86_64                2.0.9-19.el6.centos                 @anaconda-CentOS-201311272149.x86_64/6.5                 72 k
Transaction Summary
Remove       16 Package(s)
Installed size: 2.4 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing    : abrt-cli-2.0.8-21.el6.centos.x86_64                                                                                                                 1/16
  Erasing    : abrt-addon-kerneloops-2.0.8-21.el6.centos.x86_64                                                                                                    2/16
  Erasing    : abrt-addon-ccpp-2.0.8-21.el6.centos.x86_64                                                                                                          3/16
  Erasing    : abrt-tui-2.0.8-21.el6.centos.x86_64                                                                                                                 4/16
  Erasing    : abrt-addon-python-2.0.8-21.el6.centos.x86_64                                                                                                        5/16
  Erasing    : abrt-2.0.8-21.el6.centos.x86_64                                                                                                                     6/16
  Erasing    : abrt-libs-2.0.8-21.el6.centos.x86_64                                                                                                                7/16
  Erasing    : libreport-plugin-kerneloops-2.0.9-19.el6.centos.x86_64                                                                                              8/16
  Erasing    : libreport-cli-2.0.9-19.el6.centos.x86_64                                                                                                            9/16
  Erasing    : libreport-plugin-logger-2.0.9-19.el6.centos.x86_64                                                                                                 10/16
  Erasing    : libreport-plugin-mailx-2.0.9-19.el6.centos.x86_64                                                                                                  11/16
  Erasing    : libreport-compat-2.0.9-19.el6.centos.x86_64                                                                                                        12/16
  Erasing    : libreport-plugin-reportuploader-2.0.9-19.el6.centos.x86_64                                                                                         13/16
  Erasing    : libreport-plugin-rhtsupport-2.0.9-19.el6.centos.x86_64                                                                                             14/16
  Erasing    : libreport-python-2.0.9-19.el6.centos.x86_64                                                                                                        15/16
  Erasing    : libreport-2.0.9-19.el6.centos.x86_64                                                                                                               16/16
  Verifying  : libreport-plugin-mailx-2.0.9-19.el6.centos.x86_64                                                                                                   1/16
  Verifying  : libreport-2.0.9-19.el6.centos.x86_64                                                                                                                2/16
  Verifying  : libreport-plugin-logger-2.0.9-19.el6.centos.x86_64                                                                                                  3/16
  Verifying  : abrt-tui-2.0.8-21.el6.centos.x86_64                                                                                                                 4/16
  Verifying  : libreport-plugin-kerneloops-2.0.9-19.el6.centos.x86_64                                                                                              5/16
  Verifying  : libreport-plugin-rhtsupport-2.0.9-19.el6.centos.x86_64                                                                                              6/16
  Verifying  : abrt-addon-kerneloops-2.0.8-21.el6.centos.x86_64                                                                                                    7/16
  Verifying  : libreport-compat-2.0.9-19.el6.centos.x86_64                                                                                                         8/16
  Verifying  : abrt-2.0.8-21.el6.centos.x86_64                                                                                                                     9/16
  Verifying  : abrt-libs-2.0.8-21.el6.centos.x86_64                                                                                                               10/16
  Verifying  : libreport-python-2.0.9-19.el6.centos.x86_64                                                                                                        11/16
  Verifying  : abrt-addon-python-2.0.8-21.el6.centos.x86_64                                                                                                       12/16
  Verifying  : libreport-plugin-reportuploader-2.0.9-19.el6.centos.x86_64                                                                                         13/16
  Verifying  : abrt-cli-2.0.8-21.el6.centos.x86_64                                                                                                                14/16
  Verifying  : libreport-cli-2.0.9-19.el6.centos.x86_64                                                                                                           15/16
  Verifying  : abrt-addon-ccpp-2.0.8-21.el6.centos.x86_64                                                                                                         16/16
Removed:
  libreport-plugin-rhtsupport.x86_64 0:2.0.9-19.el6.centos
Dependency Removed:
  abrt.x86_64 0:2.0.8-21.el6.centos                   abrt-addon-ccpp.x86_64 0:2.0.8-21.el6.centos                 abrt-addon-kerneloops.x86_64 0:2.0.8-21.el6.centos
  abrt-addon-python.x86_64 0:2.0.8-21.el6.centos      abrt-cli.x86_64 0:2.0.8-21.el6.centos                        abrt-libs.x86_64 0:2.0.8-21.el6.centos
  abrt-tui.x86_64 0:2.0.8-21.el6.centos               libreport.x86_64 0:2.0.9-19.el6.centos                       libreport-cli.x86_64 0:2.0.9-19.el6.centos
  libreport-compat.x86_64 0:2.0.9-19.el6.centos       libreport-plugin-kerneloops.x86_64 0:2.0.9-19.el6.centos     libreport-plugin-logger.x86_64 0:2.0.9-19.el6.centos
  libreport-plugin-mailx.x86_64 0:2.0.9-19.el6.centos libreport-plugin-reportuploader.x86_64 0:2.0.9-19.el6.centos libreport-python.x86_64 0:2.0.9-19.el6.centos
Backing up and removing old repository files...
Downloading Oracle Linux release package...
Loaded plugins: fastestmirror
Determining fastest mirrors
ol6_UEK_latest                                                                                                                                                  351/351
ol6_latest                                                                                                                                                  26103/26103
oraclelinux-release-6Server-5.0.2.x86_64.rpm                                                                                                     |  22 kB     00:00
redhat-release-server-6Server-                                                                                         | 2.6 kB     00:00
Switching old release package with Oracle Linux...
warning: oraclelinux-release-6Server-5.0.2.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Installing base packages for Oracle Linux...
Loaded plugins: fastestmirror, security
Determining fastest mirrors
ol6_UEK_latest                                                                                                                                   | 1.2 kB     00:00
ol6_UEK_latest/primary                                                                                                                           |  16 MB     00:09
ol6_UEK_latest                                                                                                                                                  351/351
ol6_latest                                                                                                                                       | 1.4 kB     00:00
ol6_latest/primary                                                                                                                               |  41 MB     00:21
ol6_latest                                                                                                                                                  26103/26103
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package basesystem.noarch 0:10.0-4.el6 will be updated
---> Package basesystem.noarch 0:10.0-4.0.1.el6 will be an update
---> Package grub.x86_64 1:0.97-84.el6_5 will be updated
---> Package grub.x86_64 1:0.97-84.0.1.el6_5 will be an update
---> Package grubby.x86_64 0:7.0.15-5.el6 will be updated
---> Package grubby.x86_64 0:7.0.15-5.0.4.el6 will be an update
---> Package initscripts.x86_64 0:9.03.40-2.el6.centos.4 will be updated
---> Package initscripts.x86_64 0:9.03.40-2.0.1.el6_5.4 will be an update
---> Package oracle-logos.noarch 0:60.0.14-1.0.1.el6 will be obsoleting
---> Package oraclelinux-release-notes.x86_64 0:6Server-11 will be installed
---> Package plymouth.x86_64 0:0.8.3-27.el6.centos.1 will be updated
---> Package plymouth.x86_64 0:0.8.3-27.0.1.el6_5.1 will be an update
--> Processing Dependency: plymouth-core-libs = 0.8.3-27.0.1.el6_5.1 for package: plymouth-0.8.3-27.0.1.el6_5.1.x86_64
---> Package redhat-logos.noarch 0:60.0.14-12.el6.centos will be obsoleted
--> Running transaction check
---> Package plymouth-core-libs.x86_64 0:0.8.3-27.el6.centos.1 will be updated
---> Package plymouth-core-libs.x86_64 0:0.8.3-27.0.1.el6_5.1 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
 Package                                          Arch                          Version                                         Repository                         Size
 oracle-logos                                     noarch                        60.0.14-1.0.1.el6                               ol6_latest                         12 M
     replacing  redhat-logos.noarch 60.0.14-12.el6.centos
 oraclelinux-release-notes                        x86_64                        6Server-11                                      ol6_latest                         77 k
 basesystem                                       noarch                        10.0-4.0.1.el6                                  ol6_latest                        4.3 k
 grub                                             x86_64                        1:0.97-84.0.1.el6_5                             ol6_latest                        932 k
 grubby                                           x86_64                        7.0.15-5.0.4.el6                                ol6_latest                         43 k
 initscripts                                      x86_64                        9.03.40-2.0.1.el6_5.4                           ol6_latest                        940 k
 plymouth                                         x86_64                        0.8.3-27.0.1.el6_5.1                            ol6_latest                         89 k
Updating for dependencies:
 plymouth-core-libs                               x86_64                        0.8.3-27.0.1.el6_5.1                            ol6_latest                         88 k
Transaction Summary
Install       2 Package(s)
Upgrade       6 Package(s)
Total download size: 14 M
Downloading Packages:
(1/8): basesystem-10.0-4.0.1.el6.noarch.rpm                                                                                                      | 4.3 kB     00:00
(2/8): grub-0.97-84.0.1.el6_5.x86_64.rpm                                                                                                         | 932 kB     00:00
(3/8): grubby-7.0.15-5.0.4.el6.x86_64.rpm                                                                                                        |  43 kB     00:00
(4/8): initscripts-9.03.40-2.0.1.el6_5.4.x86_64.rpm                                                                                              | 940 kB     00:00
(5/8): oracle-logos-60.0.14-1.0.1.el6.noarch.rpm                                                                                                 |  12 MB     00:06
(6/8): oraclelinux-release-notes-6Server-11.x86_64.rpm                                                                                           |  77 kB     00:00
(7/8): plymouth-0.8.3-27.0.1.el6_5.1.x86_64.rpm                                                                                                  |  89 kB     00:00
(8/8): plymouth-core-libs-0.8.3-27.0.1.el6_5.1.x86_64.rpm                                                                                        |  88 kB     00:00
Total                                                                                                                                   1.5 MB/s |  14 MB     00:09
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid : Oracle OSS group (Open Source Software group) <>
 Package: 6:oraclelinux-release-6Server-5.0.2.x86_64 (installed)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : oracle-logos-60.0.14-1.0.1.el6.noarch                                                                                                               1/15
  Updating   : initscripts-9.03.40-2.0.1.el6_5.4.x86_64                                                                                                            2/15
  Updating   : plymouth-core-libs-0.8.3-27.0.1.el6_5.1.x86_64                                                                                                      3/15
  Updating   : plymouth-0.8.3-27.0.1.el6_5.1.x86_64                                                                                                                4/15
  Updating   : 1:grub-0.97-84.0.1.el6_5.x86_64                                                                                                                     5/15
  Updating   : basesystem-10.0-4.0.1.el6.noarch                                                                                                                    6/15
  Installing : oraclelinux-release-notes-6Server-11.x86_64                                                                                                         7/15
  Updating   : grubby-7.0.15-5.0.4.el6.x86_64                                                                                                                      8/15
  Cleanup    : 1:grub-0.97-84.el6_5.x86_64                                                                                                                         9/15
  Cleanup    : plymouth-0.8.3-27.el6.centos.1.x86_64                                                                                                              10/15
  Erasing    : redhat-logos-60.0.14-12.el6.centos.noarch                                                                                                          11/15
  Cleanup    : basesystem-10.0-4.el6.noarch                                                                                                                       12/15
  Cleanup    : initscripts-9.03.40-2.el6.centos.4.x86_64                                                                                                          13/15
  Cleanup    : plymouth-core-libs-0.8.3-27.el6.centos.1.x86_64                                                                                                    14/15
  Cleanup    : grubby-7.0.15-5.el6.x86_64                                                                                                                         15/15
  Verifying  : grubby-7.0.15-5.0.4.el6.x86_64                                                                                                                      1/15
  Verifying  : 1:grub-0.97-84.0.1.el6_5.x86_64                                                                                                                     2/15
  Verifying  : plymouth-0.8.3-27.0.1.el6_5.1.x86_64                                                                                                                3/15
  Verifying  : initscripts-9.03.40-2.0.1.el6_5.4.x86_64                                                                                                            4/15
  Verifying  : oracle-logos-60.0.14-1.0.1.el6.noarch                                                                                                               5/15
  Verifying  : oraclelinux-release-notes-6Server-11.x86_64                                                                                                         6/15
  Verifying  : basesystem-10.0-4.0.1.el6.noarch                                                                                                                    7/15
  Verifying  : plymouth-core-libs-0.8.3-27.0.1.el6_5.1.x86_64                                                                                                      8/15
  Verifying  : plymouth-0.8.3-27.el6.centos.1.x86_64                                                                                                               9/15
  Verifying  : initscripts-9.03.40-2.el6.centos.4.x86_64                                                                                                          10/15
  Verifying  : plymouth-core-libs-0.8.3-27.el6.centos.1.x86_64                                                                                                    11/15
  Verifying  : grubby-7.0.15-5.el6.x86_64                                                                                                                         12/15
  Verifying  : redhat-logos-60.0.14-12.el6.centos.noarch                                                                                                          13/15
  Verifying  : 1:grub-0.97-84.el6_5.x86_64                                                                                                                        14/15
  Verifying  : basesystem-10.0-4.el6.noarch                                                                                                                       15/15
  oracle-logos.noarch 0:60.0.14-1.0.1.el6                                         oraclelinux-release-notes.x86_64 0:6Server-11
  basesystem.noarch 0:10.0-4.0.1.el6          grub.x86_64 1:0.97-84.0.1.el6_5      grubby.x86_64 0:7.0.15-5.0.4.el6      initscripts.x86_64 0:9.03.40-2.0.1.el6_5.4
  plymouth.x86_64 0:0.8.3-27.0.1.el6_5.1
Dependency Updated:
  plymouth-core-libs.x86_64 0:0.8.3-27.0.1.el6_5.1
  redhat-logos.noarch 0:60.0.14-12.el6.centos
Updating initrd...
Installation successful!
Run 'yum upgrade' to synchronize your installed packages
with the Oracle Linux repository.
[root@test-centos ~]# yum upgrade
  kernel-uek-headers.x86_64 0:2.6.32-400.36.8.el6uek
  autofs.x86_64 1:5.0.5-89.0.1.el6_5.2                     bfa-firmware.noarch 0:          certmonger.x86_64 0:0.61-3.0.1.el6
  coreutils.x86_64 0:8.4-31.0.1.el6_5.2                    coreutils-libs.x86_64 0:8.4-31.0.1.el6_5.2        cpuspeed.x86_64 1:1.5-20.0.1.el6_4
  crash.x86_64 0:6.1.0-5.0.1.el6                           dbus.x86_64 1:1.2.24-7.0.1.el6_3                  dbus-glib.x86_64 0:0.86-6.el6_4
  dbus-libs.x86_64 1:1.2.24-7.0.1.el6_3                    dhclient.x86_64 12:4.1.1-38.P1.0.1.el6            dhcp-common.x86_64 12:4.1.1-38.P1.0.1.el6
  dracut.noarch 0:004-336.0.1.el6_5.2                      dracut-kernel.noarch 0:004-336.0.1.el6_5.2        e2fsprogs.x86_64 0:1.42.8-1.0.1.el6
  e2fsprogs-libs.x86_64 0:1.42.8-1.0.1.el6                 gstreamer.x86_64 0:0.10.29-1.0.1.el6              gstreamer-tools.x86_64 0:0.10.29-1.0.1.el6
  iptables.x86_64 0:1.4.7-11.0.1.el6                       iptables-ipv6.x86_64 0:1.4.7-11.0.1.el6           irqbalance.x86_64 2:1.0.4-9.0.1.el6_5
  java-1.7.0-openjdk.x86_64 1:   kexec-tools.x86_64 0:2.0.3-3.0.10.el6             kpartx.x86_64 0:0.4.9-72.0.1.el6_5.3
  libcom_err.x86_64 0:1.42.8-1.0.1.el6                     libgudev1.x86_64 0:147-               libss.x86_64 0:1.42.8-1.0.1.el6
  libudev.x86_64 0:147-                        libxml2.x86_64 0:2.7.6-14.0.1.el6_5.2             libxml2-python.x86_64 0:2.7.6-14.0.1.el6_5.2
  libxslt.x86_64 0:1.1.26-2.0.2.el6_3.1                    module-init-tools.x86_64 0:3.9-21.0.1.el6_4       nss.x86_64 0:3.16.1-4.0.1.el6_5
  nss-sysinit.x86_64 0:3.16.1-4.0.1.el6_5                  nss-tools.x86_64 0:3.16.1-4.0.1.el6_5             oprofile.x86_64 0:0.9.7-1.0.1.el6
  pango.x86_64 0:1.28.1-7.0.1.el6_3                        plymouth-scripts.x86_64 0:0.8.3-27.0.1.el6_5.1    policycoreutils.x86_64 0:2.0.83-
  ql2400-firmware.noarch 0:7.03.00-1.0.1.el6               ql2500-firmware.noarch 0:7.03.00-1.0.1.el6        redhat-lsb.x86_64 0:4.0-7.0.1.el6
  redhat-lsb-compat.x86_64 0:4.0-7.0.1.el6                 redhat-lsb-core.x86_64 0:4.0-7.0.1.el6            redhat-lsb-graphics.x86_64 0:4.0-7.0.1.el6
  redhat-lsb-printing.x86_64 0:4.0-7.0.1.el6               rsyslog.x86_64 0:5.8.10-8.0.1.el6                 selinux-policy.noarch 0:3.7.19-231.0.1.el6_5.3
  selinux-policy-targeted.noarch 0:3.7.19-231.0.1.el6_5.3  sos.noarch 0:2.2-47.0.1.el6_5.7                   system-config-network-tui.noarch 0:1.6.0.el6.3-1.0.1.el6
  systemtap-runtime.x86_64 0:2.3-4.0.1.el6_5               udev.x86_64 0:147-                    yum.noarch 0:3.2.29-43.0.1.el6_5
  yum-plugin-fastestmirror.noarch 0:1.1.30-17.0.1.el6_5    yum-plugin-security.noarch 0:1.1.30-17.0.1.el6_5  yum-utils.noarch 0:1.1.30-17.0.1.el6_5
  kernel-headers.x86_64 0:2.6.32-431.29.2.el6

[root@test-centos ~]# cat /etc/oracle-release
Oracle Linux Server release 6.5
[root@test-centos ~]# rpm -qi --info "oraclelinux-release"
Name        : oraclelinux-release          Relocations: (not relocatable)
Version     : 6Server                           Vendor: Oracle America
Release     : 5.0.2                         Build Date: Sat 23 Nov 2013 02:14:50 AM ICT
Install Date: Fri 19 Sep 2014 03:54:33 PM ICT      Build Host:
Group       : System Environment/Base       Source RPM: oraclelinux-release-6Server-5.0.2.src.rpm
Size        : 49559                            License: GPL
Signature   : RSA/8, Sat 23 Nov 2013 02:14:56 AM ICT, Key ID 72f97b74ec551f03
Summary     : Oracle Linux 6 release file
Description :
System release and information files
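The checks above can be taken one step further: after a CentOS-to-Oracle-Linux swap, filtering the installed package list for CentOS branding is a quick way to confirm the conversion is complete. A minimal sketch, assuming standard shell tooling (the sample package list here is hypothetical, taken from the transcript above; on the real host you would feed `rpm -qa` output into the same filter instead):

```shell
# Sample package list standing in for `rpm -qa` output on the converted host.
pkgs="oraclelinux-release-6Server-5.0.2
plymouth-0.8.3-27.0.1.el6_5.1
grubby-7.0.15-5.0.4.el6"

# Any line still carrying "centos" in its name or release tag would
# indicate an incomplete swap; grep exits non-zero on no match, so
# `|| true` keeps the pipeline from aborting under `set -e`.
leftover=$(printf '%s\n' "$pkgs" | grep -i centos || true)

if [ -z "$leftover" ]; then
  echo "no CentOS-branded packages remain"   # prints this for the sample list
else
  printf 'leftover CentOS packages:\n%s\n' "$leftover"
fi
```

On a live system, `rpm -qa | grep -i centos` is the one-liner equivalent; an empty result matches what the transcript shows after `yum upgrade` finished.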
[root@test-centos ~]#

It's very fast...

Written By: Surachart Opun
Categories: DBA Blogs