Feed aggregator

OTN Interview about Application Development with Oracle

Shay Shmeltzer - Tue, 2016-03-15 14:34

A few weeks ago, I sat down with Bob from OTN for an interview that covered some of the key products our group works on.

I covered the various frameworks (ADF, JET, MAF), what we are doing with cloud based development (DevCS) and our tools for citizen developers (ABCS).

In case you are interested in any of these acronyms, here is the video:

Note that things move really fast at Oracle: since this interview we have already released a new version of Oracle JET (and made it open source), shipped an update to Developer Cloud Service, and taken Application Builder Cloud Service to production.

Categories: Development

Converting Hortonworks Sandbox to run on Hyper-V

Pythian Group - Tue, 2016-03-15 10:58

It looks like Hortonworks recently decided to stop hosting a version of their Sandbox VM for Windows Hyper-V. I only see VirtualBox and VMware versions listed.

What if, like me, your primary learning lab machine runs Hyper-V?

Well, you can convert it fairly easily. My method is to use VirtualBox to do this.

I run VirtualBox on my Mac because it’s free, it has free conversion tools and I usually only run 1-2 VMs on it, but my Mac isn’t my learning lab. This tip WILL work on a Windows machine that has VirtualBox installed.

Note that VirtualBox and Hyper-V may not get along well if installed on the same device, hence my using two machines to do this.

In order to convert it, here’s what you need to do.

  • Download the VirtualBox Sandbox VM here.
  • Follow Hortonworks’ instructions to import the appliance into VirtualBox.
  • Find the disk that it created by looking at the properties of the VM you just created.
  • Open a terminal and navigate to that directory.
  • From that directory, run this:

VBoxManage clonehd Hortonworks_sanbox_with_hdp_2_4_virtualbox-disk1.vmdk HDP2.4.vhd --format vhd

This process runs for a bit and creates a copy in VHD format, which you can copy onto, and run from, any Hyper-V machine.

Simply create a new Hyper-V machine, as you normally would, but instead of creating a new disk, choose this one and fire it up.

On the subject of VM config: give it access to your internal network so that you can access it via browser, and a couple of processors. On memory, a word of caution: when I did this with dynamic memory enabled, the VM took all of my available system memory, so you may want to limit consumption to a number that reserves some computing power for the host and any other VMs you may want to run in parallel.

After mounting and starting my new Hyper-V VM, I found that I hadn’t allocated enough RAM or processors and it was “dying” on boot, so I upped the RAM from 2 GB to 6 GB and the processors from 1 to 4.

Next up, eth0 wasn’t found on boot, so I checked what Google had to say and found this article.

I edited the first file, and upon checking the second (/etc/sysconfig/network-scripts/ifcfg-eth0) I found that the MAC address was not recorded there, so there was nothing to change.

I saved, rebooted, and watched as eth0 was found this time – of course the VirtualBox add-in failed at boot, but that isn’t a big deal.
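
If you hit the same issue, the usual CentOS fix is to drop the stale udev rule that pins eth0 to the old VirtualBox MAC address and to clear any hard-coded MAC from the interface config. A minimal sketch, assuming the Sandbox uses the standard CentOS 6 layout (run inside the VM as root; paths may differ in your image):

# Remove the udev rule generated for the old (VirtualBox) NIC
rm -f /etc/udev/rules.d/70-persistent-net.rules

# Strip any hard-coded MAC address from the interface config
# (in my case there was none, so there was nothing to remove)
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0

# Reboot so udev regenerates the rule for the Hyper-V NIC
reboot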

When the VM came up, it instructed me to connect to 127.0.0.1:8888, which didn’t work. I looked up the IP assigned by my router, put that IP (without a port) into my browser, and was able to connect without issue.

Happy learning!

Categories: DBA Blogs

There’s more to IT than just coding

Pythian Group - Tue, 2016-03-15 10:00

 

March 2, 2016 was officially the midpoint of the Technovation Challenge in Ottawa. The 2016 competition started on Sunday January 17, at Carleton University, where Anar Simpson, Global Ambassador for Technovation, kicked off the program.

Technovation is a global technology entrepreneurship competition for young women that sets out to prove that there’s more than just coding in the technology sector. The program is designed to inspire and educate young women to pursue a career in technology by showing them all aspects of starting a technology business.

Regional Technovation chapters contact local high schools to introduce the program and recruit teams of high school girls. Thanks to the efforts of Jennifer Francis, chair of Women Powering Technology, an Ottawa chapter of Technovation started up in January 2015. The pilot was such a success that participation in 2016 has doubled, with over 100 high school and middle school girls participating and 30 female mentors from the Ottawa tech sector.

In addition to IBM, Shopify, and L-Spark, Pythian is a proud sponsor of the 2016 competition. Having just announced the Pythia program, Pythian found sponsoring Technovation a natural fit. The Pythia program focuses on increasing the percentage of talented women who work at Pythian, especially in tech roles. It also encourages and supports the participation of girls and women in STEM fields, which is exactly what Technovation is all about.

The support of the sponsors allows the teams to meet weekly at the sponsors’ facilities. Here the teams, along with their mentors, guest speakers and an instructor from Carleton University’s Technology Innovation Management (TIM) program, focus on identifying a problem facing their community, creating an app to solve the problem, coding the app, building a company, and pitching their business plan to experts in the field! It’s pretty impressive considering the high school girls squeeze this competition in on top of their day-to-day school classes and after-school activities. They are all committed and dedicated – a great sign of future leaders!

“My views of working in the technology sector have changed, since it feels like something anyone can be a part of, whereas it was a distant idea before,” said 17-year-old Doris Feng, a student at Merivale High School and member of the team Women With Ambition. “I came in with the notion that we would be coding during the first week, but it turns out much of the development takes place off screen, with many hours dedicated to brainstorming, surveying users, drawing a paper prototype, and mulling over the ideas with team members.”

I couldn’t have said it better, Doris! This is exactly what happens in the real world.

Technovation is a program designed to inspire women to pursue the entrepreneurial spirit in all of us. For more information on Technovation and starting your own local chapter, visit Technovation online. Globally, Technovation is sponsored by Adobe Foundation, Google, Verizon, CA Technologies, Intel and Oracle, in partnership with UN Women, UNESCO and MIT Media Lab.

Categories: DBA Blogs

Revisiting "Continued..."

Tim Dexter - Tue, 2016-03-15 06:39

Adding "Continued.." to the bottom of a table if the content spills over more than one page is a very common requirement for Customer Bills. I am sure most of you have already seen Tim's blog on this topic. Just wanted to add a small note here which I got as a quick tidbit from our template expert, Hok Min. This requirement came from a telecom customer:

  1. The invoice had multiple tables giving different bill breakups, such as "Current Charges", "Usage Charges", "Discounts", "Itemized bills for Local Calls", "Itemized bills for STD Calls", etc. Any of these tables could spill over to the next page.
  2. The itemized bills were grouped under a category "Your Itemized Bill"

The requirements were:

  1. Whenever a table splits across pages, the next page should repeat the table header and should also display "(Continued ..)" in the header.
  2. If the table is inside the category "Your Itemized Bill", then the heading "Your Itemized Bill" should also repeat on the next page, with the "(Continued ..)" text appended.
  3. With multiple tables within the category "Your Itemized Bill", the "(Continued ...)" message should be displayed for every table that splits across a page.

This can be seen here in the images:

Page 1: Here "YOUR ITEMIZED BILL" and "Local Calls" starts in this page. 



Page 2: Here "YOUR ITEMIZED BILL" and "Local Calls" are in continuation from previous page while "STD Calls" table starts in this page.



Page 3: Here "YOUR ITEMIZED BILL" and "STD Calls" are in continuation from previous page. 


We can use the same code logic that was explained in Tim's blog. The main thing to note here is that the init-page-total must be included within each table; if the init statement is kept outside a table, it will not be able to reset the context needed to display "Continued ..." correctly. Here the first two rows of the outer table and of the nested table are marked to "Repeat as header row at the top of each page". The itemized bills are displayed grouped by date, therefore the for-each-group is done in the third row of the nested table, and the last row has the for-each loop to display each transaction.


The below image shows the code corresponding to the above table design. Notice the use of display-condition="exceptfirst", so that the "(Continued..)" text will show in all table headers except the first one.

You can find the sample RTF template and XML data here.

Stay tuned for more updates... 

Enjoy :) !! 

Categories: BI & Warehousing

A new law of office life

Andrew Clarke - Tue, 2016-03-15 02:46
I posted my Three Laws of Office Life a long while back. Subsequent experience has revealed another one: Every office kitchen which has a sign reminding people to do their washing-up has a concomitant large pile of unwashed crockery and dirty cutlery.

People wash their own mug and cereal bowl, but are less rigorous with the crockery from the kitchen cupboard. This phenomenon will be familiar to anybody who has shared a house during their student days or later.

Don't think that installing a dishwasher will change anything: it merely transfers the problem. Someone who won't wash up a mug is even less likely to unload a dishwasher. There is only one workable solution, and that is to have no office kitchen at all. (Although this creates a new problem, as vending machine coffee is universally vile and the tea unspeakable.)

So the Pile of Washing Up constitutes an ineluctable law, but it is a fourth law, and we all know that the canon only admits sets of three laws. One must go. Since I first formulated these laws, cost-cutting in the enterprise has more-or-less abolished the practice of providing biscuits at meetings. Hence the old Second Law no longer holds, which creates a neat vacancy.

Here are the revised Laws of Office Life:

First law: For every situation there is an equal and apposite Dilbert cartoon.

Second Law: Every office kitchen which has a sign reminding people to do their washing-up has a concomitant large pile of unwashed crockery and dirty cutlery.

Third Law: The bloke with the most annoying laugh is the one who finds everything funny.

GDC16 Day 1: Daily Round Up

Oracle AppsLab - Tue, 2016-03-15 01:54

Hello everyone! I wrapped up the first day at the Game Developers Conference (GDC) in San Francisco! It’s the first Monday after daylight saving time, so a morning cup of joe in Moscone West was a welcome sight!

First Thoughts

Wow! All of the VR sessions were very popular and crowded. In the morning, I was seated in the overflow room for the HTC Vive session. Attendees were lucky to be able to go to 2 VR sessions back-to-back: there would be lines wrapping around the halls and running into other lines. By the afternoon, when foot traffic was at its highest, it was easy to get confused as to which line belonged to which session. Luckily, the organizers took into account the popularity of the VR sessions and moved them to larger rooms for the next 4 days!

On the third floor, there was a board game area where everyone could play the latest board game releases like Pandemic Legacy and Mysterium as well as a VR play area where everyone could try out the Vive and other VR games.

Sessions & Takeaways

I sat in on 6 sessions:

  • A Year in Roomscale: Design Lessons from the HTC Vive and Beyond.
    •  You are not building a game, but an experience. Players are actually doing something actively with their hands vs. a game controller.
    • There are 3 questions that players ask when they are starting a VR experience that should be addressed:
      • (a) Who am I?
      • (b) What am I supposed to do?
      • (c) How do I interact with the environment?
    • Permissibility. New players always ask when they are allowed to interact with something, but there are safety issues when they get too comfortable. One developer told a story about how a player actually tried to dive headfirst into a pool while wearing a VR device!
    • Don’t have music automatically playing when they enter the game. It’s not natural in the real world. It’s better to have a boom box and have them turn on the music instead. In addition, audio is still hard to do perfectly. Players expect perfect audio by default. If they pick up a phone, they expect to hear it out of 1 ear, not both.
  • Social Impact: Leveraging Community for Monetization, User Acquisition and Design.
    • Social Whales (SW) have high social value, thus have the highest connection to other players, and are key to a high ROI. SWs make it easy for other players to connect with one another.
    • There are 3 standard profiles that players fall under:
      • (a) The atypical social whales that always want the best things.
      • (b) The trendsetter, the one who wants to unite and lead.
      • (c) The trend spotter, the players who want to be a part of something.
    • When a social whale leaves a game, ROI falls and other players leave. This is because that 2nd degree connection is gone. To keep players from leaving, it’s important to have game mechanics that address the following player needs:
      • (a) Players want to belong.
      • (b) Players want recognition as a valuable member.
      • (c) Players want their in-game group to be recognized as the best vs. other groups.
  • Menus Suck.
    • A very interesting talk on rethinking how players access key menu items in VR.
    • Have a following object like a cat! Touching different parts of the object will allow you to select different things. It’s much easier than walking back and forth between a menu and what you have to do.
      • Job Simulator uses retro cartridges for menu selection.
    • Create menu shortcuts with the player’s body. Have the user pull things out of different parts of their head.
    • Eating as an interaction. In Job Simulator you can eat a cake marked with an “Exit” to exit the game. The cake changes to another dessert item marked with an “Are you sure?” to confirm the exit.
  • Improving Playtesting through Workshops Focusing on Exploring.
    • For games, we are experience testing (playtesting) not performing a usability test.
    • For games, especially for VR, comfort comes first. Right after that it’s ease of use.
    • When exploring desired experiences for a game, create a composition box to ensure you get ideas from all views of your development team.
    • When observing play, look for actions (e.g. vocalizations, gestures) as well as for changes in posture and focus.
  • The Tower of Want.
    • Learn critical questions our designs must answer to engage players over the long term.
    • Follow the “I want to..” and “so I can…” framework to unearth players’ short-term and long-term goals. Instead of asking why 5 times like we do in user research, we ask them to complete the framework’s “so I can…” sentence (e.g. I want to get good grades so I can get into college…so I can get a good job…so I can make a lot of money…so I can buy a house).
    • The framework creates a ladder of motivations that incentivizes a player to complete each game level in that ladder daily.
  • Cognitive Psychology of Virtual Reality: Basics, Problems and Tips.
    • Psychology is the physics of VR.
    • Use redirected walking to keep players within the same space.
    • Design for optical flow. Put shadows over areas that users are not concentrating on. It’ll help with dizziness.
    • Players underestimate depth by up to 50%.
      • Add depth by adding transitional rooms (portals). This helps ease the players into their new environment.
    • Players can see a maximum of 6 meters ahead of them for 3D.
      • In their peripherals, they can only see 2D.
      • Design keeping in mind that 20–30% of the population has problems with stereoscopic vision.

Quiz

Jonathan Lewis - Mon, 2016-03-14 16:38

Can you spot anything that might appear to be a little surprising about this (continuous) extract from a trace file ? The example is from 10.2.0.5, but 11g and 12c could produce similar results (or direct path read equivalents):


PARSING IN CURSOR #3 len=23 dep=0 uid=30 oct=3 lid=30 tim=112607775392 hv=4235652837 ad='2f647320'
select count(*) from t1
END OF STMT
PARSE #3:c=0,e=53,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=112607775385
EXEC #3:c=0,e=99,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=112607787370
WAIT #3: nam='SQL*Net message to client' ela= 9 driver id=1111838976 #bytes=1 p3=0 obj#=10443 tim=112607789931
WAIT #3: nam='db file sequential read' ela= 415 file#=5 block#=9 blocks=1 obj#=21580 tim=112607795682
WAIT #3: nam='db file scattered read' ela= 2785 file#=5 block#=905 blocks=98 obj#=21580 tim=112607801263
WAIT #3: nam='db file scattered read' ela= 2919 file#=5 block#=777 blocks=128 obj#=21580 tim=112607808280
WAIT #3: nam='db file scattered read' ela= 2066 file#=5 block#=649 blocks=128 obj#=21580 tim=112607813300
WAIT #3: nam='db file scattered read' ela= 1817 file#=5 block#=521 blocks=128 obj#=21580 tim=112607817243
WAIT #3: nam='db file scattered read' ela= 1563 file#=5 block#=393 blocks=128 obj#=21580 tim=112607820899
WAIT #3: nam='db file scattered read' ela= 1605 file#=5 block#=265 blocks=128 obj#=21580 tim=112607824710
WAIT #3: nam='db file scattered read' ela= 1529 file#=5 block#=137 blocks=128 obj#=21580 tim=112607828296
WAIT #3: nam='db file scattered read' ela= 1652 file#=5 block#=10 blocks=127 obj#=21580 tim=112607831946
FETCH #3:c=15625,e=41568,p=994,cr=996,cu=0,mis=0,r=1,dep=0,og=1,tim=112607834004
WAIT #3: nam='SQL*Net message from client' ela= 254 driver id=1111838976 #bytes=1 p3=0 obj#=21580 tim=112607835527
FETCH #3:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=112607836780
WAIT #3: nam='SQL*Net message to client' ela= 2 driver id=1111838976 #bytes=1 p3=0 obj#=21580 tim=112607837935
WAIT #3: nam='SQL*Net message from client' ela= 14371 driver id=1111838976 #bytes=1 p3=0 obj#=21580 tim=112607853526
=====================
PARSING IN CURSOR #2 len=52 dep=0 uid=30 oct=47 lid=30 tim=112607855239 hv=1029988163 ad='2f6c5ec0'
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
END OF STMT
PARSE #2:c=0,e=51,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=112607855228
WAIT #2: nam='SQL*Net message to client' ela= 4 driver id=1111838976 #bytes=1 p3=0 obj#=21580 tim=112607861803
EXEC #2:c=0,e=1271,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=112607862976
WAIT #2: nam='SQL*Net message from client' ela= 1093883 driver id=1111838976 #bytes=1 p3=0 obj#=21580 tim=112608958078
STAT #3 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=996 pr=994 pw=0 time=41588 us)'
STAT #3 id=2 cnt=8332 pid=1 pos=1 obj=21580 op='TABLE ACCESS FULL T1 (cr=996 pr=994 pw=0 time=144816 us)'

Update

If you look at the values for block# in the “db file scattered read” waits you’ll notice that they appear in descending order. This looks like a tablescan running backwards – and that’s not a coincidence, because that’s what it is.

It’s obviously a good strategy to have because if you do a big tablescan it’s the blocks at the END of the table which are most likely to be subject to change by other sessions [unless, see comment 4, you’ve done a purge of historic data], and the longer it takes you to get there the more work you’ll have to do to get consistent read versions of the last blocks in the table; so reading the last blocks first should, generally, reduce the workload – and the risk of ORA-01555: snapshot too old. Strangely it’s not documented – but it’s been around for years – at least since 10.2.0.5, if not earlier releases of 10g, through event 10460.

The topic came up in a conversation on the Oracle-L list server a few years ago, with Tanel Poder supplying the event number, but I had forgotten about it until I rediscovered the thread by accident a little while ago.

It’s not a supported feature, of course – but if you run into serious performance problems with tablescans doing lots of work with the undo tablespace (physical reads, lots of undo records applied for consistent read, etc.) while a lot of update activity is going on, then have a chat with Oracle support to see if it’s an allowed workaround.
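
For completeness, a numeric event like this would be set with the generic event syntax. The following is purely illustrative, since the level semantics of 10460 are undocumented and, as noted above, you should clear any use of it with Oracle support first:

sqlplus / as sysdba <<'EOF'
-- Illustrative only: generic syntax for setting a numeric event at session level;
-- the meaning of the level for event 10460 is undocumented.
alter session set events '10460 trace name context forever, level 1';
select count(*) from t1;
EOF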


Gluent New World: In-Memory Processing for Databases

Tanel Poder - Mon, 2016-03-14 14:52

As Gluent is all about gluing together the old world and new world in enterprises, it’s time to announce the Gluent New World webinar series!

The Gluent New World sessions cover the important technical details behind new advancements in enterprise technologies that are arriving into mainstream use.

These seminars help you stay current with the major technology changes that are inevitably arriving at your company soon (if not already), so you can make informed decisions about what to learn next – and still be relevant in your profession 5 years from now.

Think about software-defined storage, open data formats, cloud processing, in-memory computation, direct attached storage, all-flash and distributed stream processing – and this is just a start!

The speakers of this series are technical experts in their field – able to explain in detail how the technology works internally, which fundamental changes in the technology world have enabled these advancements and why it matters to all of you (not just the Googles and Facebooks out there).

I picked myself as the speaker for the first event in this series:

Gluent New World: In-Memory Processing for Databases

In this session, Tanel Poder will explain how the new CPU cache-efficient data processing methods help to radically speed up data processing workloads – on data stored in RAM and also read from disk! This is a technical session about internal CPU efficiency and cache-friendly data structures, using Oracle Database and Apache Arrow as examples.

Time:

  • Tue, Mar 22, 2016 1:00 PM – 2:00 PM CDT

Register:

After registering, you will receive a confirmation email containing information about joining the webinar.

See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

How to run OpenTSDB with Google Bigtable

Pythian Group - Mon, 2016-03-14 12:49

In a previous post (OpenTSDB and Google Cloud Bigtable) we discussed OpenTSDB, an open source distributed database specifically designed for storing timeseries data. We also explained how OpenTSDB relies on Apache HBase for a reliable and scalable data backend. However, deployment and administration of an HBase cluster is not a trivial task, as it requires a full Hadoop setup. This means that it takes a big data engineer (or better a team of them) to plan for the cluster sizing, provision the machines and setup the Hadoop nodes, configure all services and tune them for optimal performance. If this is not enough, Operations teams have to constantly monitor the cluster, deal with hardware and service failures, perform upgrades, backup regularly, and a ton of other tasks that make maintenance of a Hadoop cluster and OpenTSDB a challenge for most organizations.

With the release of Google Bigtable as a cloud service and its support for the HBase API, it was obvious that if we managed to integrate OpenTSDB with Google Bigtable, we would enable more teams to have access to the powerful functionality of OpenTSDB by removing the burden from maintaining an HBase cluster.

Nevertheless, integration of OpenTSDB with Bigtable was not as seamless as dropping a few jars in its release directory. This happened because the OpenTSDB developers went over and above the standard HBase libraries, by implementing their very own asynchbase library. Asynchbase is a fully asynchronous, non-blocking, thread-safe, high-performance HBase API. And no one can put it better than the asynchbase developers themselves who claim that ‘This HBase client differs significantly from HBase’s client. Switching to it is not easy as it requires one to rewrite all the code that was interacting with any HBase API.’

This meant that integration with Google Bigtable required OpenTSDB to switch back to the standard HBase API. We saw the value of such an effort here at Pythian and set about developing this solution.

The asyncbigtable library

Today, we are very happy to announce the release of the asyncbigtable library. The asyncbigtable library is a 100% compatible implementation of the great asynchbase library that can be used as a drop-in replacement, enabling OpenTSDB to use Google Bigtable as a storage backend.

Thanks to support from the OpenTSDB team, the asyncbigtable code is hosted in the OpenTSDB GitHub repository.

Challenges

To create asyncbigtable we had to overcome two great challenges. The first one was that OpenTSDB assumes that the underlying library (until now asynchbase) performs asynchronous and non-blocking operations. On the other hand, the standard HBase API only supports synchronous and blocking calls. As a workaround, we used the BufferedMutator implementation, which collects all Mutation operations in a buffer and performs them in batches, allowing mutations with extremely low latency.

The second challenge stemmed from the fact that the OpenTSDB project has a very limited set of jar dependencies, which are explicitly defined in Makefiles. Contrary to this spartan approach, the HBase and Bigtable client libraries have a significant number of transitive dependencies. Since adding those dependencies one-by-one to the OpenTSDB build process would complicate its dependency management, we decided to package all asyncbigtable dependencies in an uber-jar using the Maven assembly plugin. Therefore, building OpenTSDB with asyncbigtable support is now as simple as downloading a single beefy jar.

Build steps

Before you start

Before you build OpenTSDB with Google Bigtable support, you must complete the following required steps:

  1. Create a Google Bigtable cluster (https://cloud.google.com/bigtable/docs/creating-cluster)
  2. Install HBase shell with access to the Google Bigtable cluster (https://cloud.google.com/bigtable/docs/installing-hbase-shell)
  3. Download and install the required tools for compiling OpenTSDB from source (http://opentsdb.net/docs/build/html/installation.html#compiling-from-source)

Build and run OpenTSDB

  1. Clone and build the modified source code from the Pythian github repository:

git clone -b bigtable git@github.com:pythian/opentsdb.git
cd opentsdb
sh build-bigtable.sh

  2. Create OpenTSDB tables

OpenTSDB provides a script that uses HBase shell to create its tables. To create the tables, run the following command:

env COMPRESSION=NONE HBASE_HOME=/path/to/hbase-1.1.2 \
./src/create_table.sh

  3. Run OpenTSDB

export HBASE_CONF=/path/to/hbase-1.1.2/conf
mkdir -p <tmp_dir>
./build/tsdb tsd --port=4242 --staticroot=build/staticroot \
--cachedir=<tmp_dir>
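
Once the TSD is up, a quick way to check the Bigtable backend end-to-end is to write a data point and read it back over the standard OpenTSDB HTTP API. A minimal sketch (the metric name and tag are made up for illustration; new metrics must either be allowed via the tsd.core.auto_create_metrics setting or pre-registered with 'tsdb mkmetric'):

# Write one data point for a throwaway metric
curl -s -X POST http://localhost:4242/api/put \
  -H 'Content-Type: application/json' \
  -d '{"metric":"test.sanity","timestamp":'"$(date +%s)"',"value":42,"tags":{"host":"demo"}}'

# Read it back
curl -s 'http://localhost:4242/api/query?start=1h-ago&m=sum:test.sanity'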

Future work

Our work on asyncbigtable by no means stops here. We are putting great effort towards improving the library to achieve the high quality standards of the rest of the OpenTSDB code. Our first priority is to test the library against real-world scenarios and reach the highest quality. In the future, we plan to benchmark the performance of OpenTSDB with Bigtable and compare how it competes against HBase.

We are also working on building a true asynchronous implementation of the asyncbigtable library by integrating deeper with the Google Bigtable API.

Acknowledgements

We would like to thank the OpenTSDB developers (Benoît Sigoure and Chris Larsen) for their brilliant work in building such great software and for embracing the asyncbigtable library. Their insights and code contributions helped us deal with some serious issues. Also, we would like to thank the Google Cloud Bigtable team because they expressed genuine interest in this project and they were very generous in providing us with cloud infrastructure and excellent support.

Categories: DBA Blogs

#EMd360 … OEM health checks made easy

DBASolved - Mon, 2016-03-14 12:01

Oracle Enterprise Manager 12c is a great tool! Now that 13c is out, it is getting even better. This post, however, is not really about Oracle Enterprise Manager, but rather about a quick and simple health check tool that I’ve put together. With the help of some really cool co-workers (Carlos Sierra and Mauro Pagano), I’ve put together a small diagnostic tool called EMd360.

EMd360 stands for Enterprise Manager d360. The concept behind this tool is just like other tools that have been released with the 360 concept (edb360 and sqld360); to provide a quick and easy approach to checking an environment. As with edb360 and sqld360, EMd360 is a completely free tool for anyone to use.

So, why is there a need for EMd360? It is quite simple: there are so many things that go into OEM, and you get so much out of OEM, that it is overwhelming. As a consultant, I’ve been asked to review a lot of OEM architectures and the associated performance. A lot of this information is in the OMR, and often I’m using other tools like REPVFY and OMSVFY, plus a handful of scripts. I’ve decided to make my life (and hopefully yours) a bit easier by building EMd360.

The first (base) release of EMd360 is now live on GitHub (https://github.com/dbasolved/EMd360.git). Go and get it! Test it out!

Download

If you are interested in trying out EMd360, you can download it from GitHub.

Instructions

Download EMd360 from GitHub as a zip file
Unzip EMd360-master.zip on the OMR server and navigate to the directory where you unzipped it
Connect to the OMR using SQL*Plus and execute @emd360.sql

Options

The @emd360.sql script takes two variables. You will be prompted for them if they are not passed on the SQL*Plus command line.

Variable 1 – Server name of the Oracle Management Service (without domain names)
Variable 2 – Oracle Management Repository name (database SID)

Example:

$ sqlplus / as sysdba
SQL> @emd360 pebble oemrep

Let me know your thoughts and if there is something you would like to see in it. Every environment is different and there maybe something you are looking for that is not provided. Let me know via email or blog comment and I’ll try to get it added in the next release.

Enjoy!!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

OGh DBA Day – Call for Papers!

Marco Gralike - Mon, 2016-03-14 11:41
While part of this packed event is already underway (SQL Celebration Day bit), the DBA…

Changing SOA properties via WLST

Marc Kelderman - Mon, 2016-03-14 09:30


Here is a script to change some properties of SOA Suite. It covers generic settings such as:

  • soa-infra
    • AuditLevel
    • GlobalTxMaxRetry
    • DisableCompositeSensors
    • DisableSpringSESensors
  • mediator
    • AuditLevel
  • bpel
    • AuditLevel
    • SyncMaxWaitTime
    • Recovery Schedule Config

# Imports for the JMX classes used below (Jython syntax, so the bare
# names Attribute, ObjectName and CompositeDataSupport resolve directly)
from javax.management import Attribute, ObjectName
from javax.management.openmbean import CompositeDataSupport
import java.util

connect('weblogic', 'Welcome1', 't3://myhost:7001')
domainRuntime()

#
# soa-infra
#
SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=soa-infra,type=SoaInfraConfig,Application=soa-infra')

# Off, Production and Development
SOAattribute = Attribute('AuditLevel', 'Production')
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)

print '*** soa-infra: set AuditLevel', mbs.getAttribute(SOAInfraConfigobj, 'AuditLevel')

SOAattribute = Attribute('GlobalTxMaxRetry', 0)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** soa-infra: set GlobalTxMaxRetry', mbs.getAttribute(SOAInfraConfigobj, 'GlobalTxMaxRetry')

SOAattribute = Attribute('DisableCompositeSensors', true)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** soa-infra: set DisableCompositeSensors', mbs.getAttribute(SOAInfraConfigobj, 'DisableCompositeSensors')

SOAattribute = Attribute('DisableSpringSESensors', true)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** soa-infra: set DisableSpringSESensors', mbs.getAttribute(SOAInfraConfigobj, 'DisableSpringSESensors')

#
# Mediator
#
SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=mediator,type=MediatorConfig,Application=soa-infra')

SOAattribute = Attribute('AuditLevel', 'Inherit')
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** mediator: set AuditLevel', mbs.getAttribute(SOAInfraConfigobj, 'AuditLevel')

#
# BPEL
#

SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=bpel,type=BPELConfig,Application=soa-infra')

SOAattribute = Attribute('SyncMaxWaitTime', 120)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)

print '*** bpel: set SyncMaxWaitTime', mbs.getAttribute(SOAInfraConfigobj, 'SyncMaxWaitTime')

# AuditLevel
#   off: 0
#   inherit: 1
#   minimal: 2
#   production: 3
#   development: 4
#   onerror: 5

SOAattribute = Attribute('AuditLevel', 'production')
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** bpel: set AuditLevel', mbs.getAttribute(SOAInfraConfigobj, 'AuditLevel')

#javax.management.ObjectName
SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=bpel,type=BPELConfig,Application=soa-infra')

#javax.management.openmbean.CompositeDataSupport
rec_config_obj  = mbs.getAttribute(SOAInfraConfigobj, 'RecoveryConfig')

rec_keySet = rec_config_obj.getCompositeType().keySet()
rec_keys = rec_keySet.toArray()
rec_keyitems = [ rec_key for rec_key in rec_keys ]

#javax.management.openmbean.CompositeDataSupport
rec_cluster_obj = rec_config_obj.get('ClusterConfig')
rec_recurrr_obj = rec_config_obj.get('RecurringScheduleConfig')
rec_startup_obj = rec_config_obj.get('StartupScheduleConfig')

#
# StartupScheduleConfig
#
cnt = 0

# java.util.Collections.UnmodifiableSet
keySet = rec_startup_obj.getCompositeType().keySet()

# array
keys = keySet.toArray()

# list
keyitems = [ key for key in keys ]

# array
values = rec_startup_obj.getAll(keyitems)

for key in keys:
  if key == 'maxMessageRaiseSize':
    values[cnt] = 0
    print '*** bpel: set StartupScheduleConfig:maxMessageRaiseSize ' + key + ' to value ' + str(values[cnt])
  cnt = cnt + 1

#javax.management.openmbean.CompositeDataSupport
new_rec_startup_obj = CompositeDataSupport(rec_startup_obj.getCompositeType(), keyitems, values)

#
# RecurringScheduleConfig
#
cnt = 0

keySet = rec_recurrr_obj.getCompositeType().keySet()
keys = keySet.toArray()
keyitems = [ key for key in keys ]
values = rec_recurrr_obj.getAll(keyitems)

for key in keys:
  if key == 'maxMessageRaiseSize':
    values[cnt] = 0
    print '*** bpel: set RecurringScheduleConfig:maxMessageRaiseSize ' + key + ' to value ' + str(values[cnt])
  if key == 'startWindowTime':
    values[cnt] = "00:00"
    print '*** bpel: set RecurringScheduleConfig:startWindowTime ' + key + ' to value ' + str(values[cnt])
  if key == 'stopWindowTime':
    values[cnt] = "00:00"
    print '*** bpel: set RecurringScheduleConfig:stopWindowTime ' + key + ' to value ' + str(values[cnt])
  cnt = cnt + 1

#javax.management.openmbean.CompositeDataSupport
new_rec_recurrr_obj = CompositeDataSupport(rec_recurrr_obj.getCompositeType(), keyitems, values)

pyMap = { "ClusterConfig":rec_cluster_obj, "RecurringScheduleConfig":new_rec_recurrr_obj, "StartupScheduleConfig":new_rec_startup_obj }
javaMap = java.util.HashMap()
for k in pyMap.keys():
  javaMap[k] = pyMap[k]

new_rec_config_obj = CompositeDataSupport(rec_config_obj.getCompositeType(), javaMap)

#javax.management.Attribute
SOAattribute = Attribute('RecoveryConfig', new_rec_config_obj)

mbs.setAttribute(SOAInfraConfigobj, SOAattribute)

Bug in Ointment: ORA-600 in Online Datafile Move

Pythian Group - Mon, 2016-03-14 09:02

Instead of using ‘fly in the ointment’, I have used ‘bug in ointment’ because in this prolonged Australian summer my backyard is full of bugs (to the sheer delight of my bug-loving son, while causing much anxiety among the rest of us). When your backyard is full of bugs and you get bugs in a database, it’s only natural to customize the idioms.

Oracle 12c has been warming the hearts of database aficionados in various ways with its features. One of the celebrated features is online datafile moving and renaming. Lots has been written about it, and suffice it to say that we don’t need any down time in order to move, rename, or copy data files anymore. It’s an online operation with zero down time, at the cost of a slight performance overhead.
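
For anyone who has not tried it yet, the move itself is a single statement. A minimal sketch with made-up paths (for a pluggable database’s file, run it while connected to that PDB):

sqlplus / as sysdba <<'EOF'
-- 12c online datafile move: sessions keep reading and writing while the file is copied
alter database move datafile '/u01/oradata/pdb1/users01.dbf'
  to '/u02/oradata/pdb1/users01.dbf';
EOF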

I was playing with this feature on my test system with Oracle 12.1 on OEL 6, and when moving a datafile in a pluggable database I got this error:

ORA-600 [kpdbGetOperLock-incompatible] from ALTER PLUGGABLE DATABASE .. DATAFILE ALL ONLINE

Well, I tried searching for this error using the ORA-600 lookup tool, but it didn’t turn up anything and simply informed me:

An Error document for ORA-600 [kpdbgetoperlock-incompatible] is not registered with the tool.

Digging more in My Oracle Support pulled out the following associated bug:

Bug 19329654 – ORA-600 [kpdbGetOperLock-incompatible] from ALTER PLUGGABLE DATABASE .. DATAFILE ALL ONLINE (Doc ID 19329654.8)

The good news was that the bug was fixed in the 12.1.0.2.1 (Oct 2014) Database Patch Set Update. And it’s true, after applying this PSU, everything was hunky-dory.

Categories: DBA Blogs

Monitoring Oracle Database with Zabbix

Gerger Consulting - Mon, 2016-03-14 08:14

Attend our free webinar and learn how you can use Zabbix, the open source monitoring solution, to monitor your Oracle Database instances. The webinar is presented by Oracle ACE and Certified Master Ronald Rood.


About the Webinar:

Enterprise IT is moving to the Cloud. With tens, hundreds, even thousands of servers in the Cloud, monitoring the uptime, performance and quality of the Cloud infrastructure becomes a challenge that traditional monitoring tools struggle to solve. Enter Zabbix. Zabbix is a low footprint, low impact, open source monitoring tool that provides various notification types and integrates easily with your ticketing system. During the webinar, we'll cover the following topics:

  • Installation and configuration of Zabbix in the Cloud
  • Monitoring Oracle databases using Zabbix (see the agent-side sketch after this list)
  • How to use Zabbix templates to increase the quality and efficiency of your monitoring setup
  • How to setup Zabbix for large and remote networks
  • How to trigger events in Zabbix
  • Graphing with Zabbix
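
As a taste of the Oracle monitoring topic, the usual pattern is to expose a SQL check to the Zabbix agent through a UserParameter entry. A minimal sketch, assuming an Oracle client on the monitored host and a dedicated low-privilege monitoring account (the key name, credentials, script path and query are all made up for illustration):

# /etc/zabbix/zabbix_agentd.d/oracle.conf (hypothetical key)
UserParameter=oracle.active_sessions,/etc/zabbix/scripts/active_sessions.sh

# /etc/zabbix/scripts/active_sessions.sh (hypothetical wrapper)
#!/bin/bash
# Return the current number of active sessions as a bare integer for Zabbix
sqlplus -s zabbix/secret@ORCL <<'EOF' | grep -v '^$'
set heading off feedback off pagesize 0
select count(*) from v$session where status = 'ACTIVE';
EOF
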
Categories: Development

ORDS and PL/SQL

Kris Rice - Mon, 2016-03-14 07:56

Seems I've never posted about PL/SQL based REST endpoints other than using the OWA toolkit. Doing the htp.p calls manually gives you control over every aspect of the results, however there is an easier way. With PL/SQL-based source types, the ins and outs can be used directly without any additional programming. Here's a simple example of an anonymous block doing about as little as possible but

Oracle Mobile Cloud Service Update (v1.2): New Features and Enhancements

Oracle Mobile Cloud Service (MCS) provides the services you need to develop a comprehensive strategy for mobile app development and delivery. It provides everything you need to establish an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

KeePass 2.32

Tim Hall - Mon, 2016-03-14 06:33

KeePass 2.32 has been released. You can download it from here.

You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

Cheers

Tim…


PeopleSoft on the Oracle Cloud – what does it mean?

Duncan Davies - Mon, 2016-03-14 06:00

There have been a few announcements over the last couple of weeks about the Oracle Public Cloud. But what does it actually mean for the PeopleSoft community?

What is Oracle Public Cloud?

The Oracle Public Cloud is Oracle’s competitor to the Infrastructure as a Service (IaaS) providers that have swiftly risen to create a whole industry that didn’t exist 10 years ago. Because they’re the market leader (by far) everyone automatically thinks of Amazon, however Microsoft Azure, Google Compute and Rackspace are also players in the market.

As PeopleSoft adopts more SaaS-like features (new UI, incremental updates etc) companies have started to move their infrastructure from their own data-centres to the cloud. For many companies this makes good business sense, however rather than have customers go to a 3rd party provider Oracle would rather provide the cloud service itself. Obviously this is better for Oracle, however the customer benefits too (retaining a single vendor, and Oracle can potentially optimise their applications for their own cloud better than they can for cloud infrastructure belonging to other vendors). There may also be cost savings for the customer, however I haven’t looked at pricing yet.

Doesn’t Oracle already do Hosting?

Yes, Oracle has long had a service that will host infrastructure on your behalf – Oracle On Demand. This is more of an older-style ASP (Application Service Provider). You’re more likely to be on physical hardware without much in the way of flexibility/scalability and tied into a long-term hosting contract, so the Oracle Public Cloud is a major step forwards in a number of ways.

How will Oracle Public Cloud be better?

I attended a couple of workshops on this last week and it looks very promising. It has all the attributes required for it to be properly classed as ‘Cloud’:

• subscription pricing,
• elasticity of resources (so you can scale instances according to demand),
• resilience of data centres (so, if you’re based in the UK you might be looking at the Slough data centre, however there are two ‘availability zones’ within Slough so if one gets hit by an outage you’ll still be able to connect to the other one)

Interestingly, it also includes several ‘Database as a Service’ offerings, each providing increasing levels of performance. With this model you don’t need to worry about the virtual machine, operating system etc. that your database runs on; you receive access to a database and leave the maintenance to others. You would still need to have your other tiers on the IaaS offerings.

This opens up the possibility of multiple tiers of Cloud service:

1. Just the Infrastructure (client does all the database and application admin)
2. DBaaS (client has other tiers on IaaS, but does not do DB admin)
3. Full Cloud solution (uses Oracle Cloud and a partner to do all administration)

How can I best take advantage?

The best time to move is probably at the same time as an upgrade. Upgrades normally come with a change in some of the hardware (due to the supported platforms changing), so moving to the cloud allows the hardware to change without any up-front costs.

PeopleSoft 9.2 and the more recent PeopleTools versions have a lot of features that were built for the Cloud, so by running it on-premises you’re not realising the full capabilities of your investment.

We’d recommend you try using the Cloud for your Dev and Test instances first, before leaping in with Production at a later date. Oracle have tools to help you migrate on-premises instances to their Cloud. (At this point – Mar 2016 – we have not tested these tools.)

What will the challenges be?

The first challenge is “how do I try it?”. This is pretty straightforward, in that you can get a partner to demonstrate it to you, or get yourself an Oracle Public Cloud account and then provision a PeopleSoft instance using one of the PUM images as a demo. This would work fine for looking at new functionality, or as a conference room pilot.

One of the biggest challenges is likely to be security – not the security of Oracle’s cloud, but securing your PeopleSoft instances which previously might have been available only within your corporate LAN. If you need assistance with this, speak to a partner with experience using Oracle Public Cloud.

