What a cracking Oracle Midlands event!
The evening started with a session on “Designing Efficient SQL” by Jonathan Lewis. The first few slides prompted this tweet.
When someone asks me a question about SQL tuning my heart sinks. It’s part of my job and I can do it, but I find it really hard to communicate what I’m doing. Jonathan’s explanation during this session was probably the best one I’ve ever heard. Rather than trying to explain a million and one optimizer features, it’s very much focussed on a “What are you actually trying to achieve?” approach. It should be mandatory viewing for all Oracle folks.
After the break, where I stuffed myself with samosas, it was on to the lightning talks (10 mins each).
- Breaking Exadata - Jonathan Lewis, JL Computer Consultancy : This focused on a couple of situations where the horsepower of Exadata doesn’t come to the rescue, like large hash joins that flood to disk and decompressions in the storage cells being abandoned and the compressed blocks being sent back to the compute nodes to be decompressed. If I ever get to use an Exadata…
- How to rename a 500GB schema in 10 minutes - Richard Harrison, EON : Why can’t we have a rename user/schema command? Richard showed a quick way to use transportable tablespaces to rename a schema. Neat!
- Oracle Big Data Appliance – What’s in the box? - Salih Oztop, Business AnalytiX : The title says it all really. I thought it was a really good introduction to the BDA. I’ve been to 1 hour talks on this subject that didn’t convey as much information as he managed to fit into 10 minutes. Also, a hint at a cool new feature about to be announced…
- Installing RAC: Things to sort out with your systems and network admins - Patrick Hurley, Scale Abilities : Patrick is a cool guy and he upped his cool rating further by brandishing a light sabre as a pointer during his talk! His session was a list of gotchas he’s encountered while installing RAC. Some of them I’ve encountered myself. Some not. Good stuff.
- Is the optimiser too smart now? - Martin Widlake, ORA600 : I could hear a voice, but I couldn’t see anyone over the podium. :) The question was, has it got to a point where it is too complicated for normal folks and beginners to stand a chance at understanding it, or should we now be treating it like a black box? My own feeling is that 12c might be the turning point where I really have to say I don’t understand it any more. It feels a bit sad, but maybe it is inevitable…
I thought the lightning talks worked really well. It felt like a whole conference packed into one hour.
Big thanks to Mike for organising it and to all the speakers for doing a great job. The next event will be up on the website soon. Please show your support! These things live or die based on your participation…
Tim…

Oracle Midlands : Event #4 – Summary was first posted on July 15, 2014 at 10:51 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
While I was at BGOUG I went for swim each morning before the conference. That got me to thinking, perhaps I should start swimming again…
It’s been 4 weeks since I got back from the conference and I’ve been swimming every morning. It was a bit of a struggle at first. I think it took me 2-3 days to work up to a mile (1600M – about 9M short of a real mile). Since then I’ve been doing a mile each day and it’s going pretty well.
I’m pretty much an upper body swimmer at the moment. I kick my legs just enough to keep them from sinking, but don’t really generate any forward thrust with them. At this point I’m concentrating on my upper body form. When I think about it, my form is pretty good. When I get distracted, like when I am having to pass people, it breaks down a little. I guess you could say I am in a state of “conscious competence”. Over the next few weeks this should set in a bit and I can start working on some other stuff. It’s pointless to care too much about speed at this point, because if my form breaks down I end up with a faster arm turnover, but use more effort and actually swim slower. The mantra is form, form, form!
Breathing is surprisingly good. I spent years as a left side breather (every 4th stroke). During my last bout of swimming (2003-2008) I forced myself to switch to bilateral breathing, but still felt the left side was more natural. Having had a 6 year break, I’ve come back and both sides feel about the same. If anything, I would say my right side technique is slightly better than my left. Occasionally I will throw in a length of left-only or right-only (every 4th stroke) breathing for the hell of it, but at the moment every 3rd stroke is about the best option for me. As I get fitter I will start playing with things like every 5th stroke and lengths of no breathing just to add a bit of variety.
Turns are generally going pretty well. Most of the time I’m fine. About 1 in 20 I judge the distance wrong and end up having a really flimsy push off. I’m sure my judgement will improve over time.
At this point I’m taking about 33 minutes to complete a mile. The world record for 1500M short course (25M pool) is 14:10. My first goal is to get my 1600M time down to double the 1500M world record. Taking 5 minutes off my time seems like quite a big challenge, but I’m sure as I bring my legs into play and my technique improves my speed will increase significantly.
As I get more into the swing of things I will probably incorporate a bit of interval training, like a sprint length, followed by 2-3 at a more sedate pace. That should improve my fitness quite a lot and hopefully improve my speed.
For a bit of fun I’ve added a couple of lengths of butterfly after I finish my main swim. I used to be quite good at butterfly, but at the moment I’m guessing the life guards think I’m having a fit. It would be nice to be able to bang out a few lengths of that and not feel like I was dying.
I don’t do breaststroke any more, as it’s not good for my hips. Doing backstroke in a pool with other people in the lane sucks, so I can’t be bothered with that. Maybe on days when the pool is quieter I will work on it a bit, but for now the main focus is crawl.
PS. I reserve the right to get bored, give up and eat cake instead at any time…

Swimming Progress was first posted on July 12, 2014 at 9:59 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
Virtual Technology Summit - Content is now OnDemand!
In this four-track virtual event attendees had the opportunity to learn firsthand from Oracle ACEs, Java Champions, and Oracle product experts, as they shared their insight and expertise on Java, systems, database and middleware. A replay of the sessions is now available for your viewing.

Architect Community
In addition to interviews with tech experts and community leaders, the OTN ArchBeat YouTube Channel also features technical videos, most pulled from various OTN online technical events. The following are the three most popular of those tech videos for the past seven days.
Debugging and Logging for Oracle ADF Applications
We're only human. Regardless of how much work Oracle ADF does for us, or how powerful the JDeveloper IDE is, the inescapable truth is that as developers we will still make mistakes and introduce bugs into our ADF applications. In this video Oracle ADF Product Manager Chris Muir explores the sophisticated debugging tooling JDeveloper provides.
Developer Preview: Oracle WebLogic 12.1.3
Oracle WebLogic 12.1.3 includes some exciting developer-centric enhancements. In this video Steve Button focuses on some of the more interesting updates around Java EE 7 features and examines how they will affect your development process.
Best Practices in Oracle ADF Development
In this video Frank Nimphius presents a brown-bag of ideas, hints and best practices that will help you to build better ADF applications.
"I always wanted to be somebody, but now I realize I should have been more specific." - Lily Tomlin
@Java RT @JDeveloper: Running Oracle ADF application High availability (HA)
Oracle DB Dev Facebook Posts -
- Excellent article leading up to the Oracle #bigdatasql launch next week - 10 Ways to Get the Most Out of Big Data
- Great Blog here from good friend Maria Colgan on Oracle In-Memory and the Optimizer... Great Read!
Congratulations to the Winners #IoTDevchallenge -
Oracle Technology Network and Oracle Academy are proud to announce the winners of the IoT Developer Challenge. All of them make the Internet of Things come true. And, of course, all were built with the Java platform at the center of Things. See who the winners are in this blog post - https://blogs.oracle.com/java/entry/announcing_the_iot_developer_challenge.
OS Tips and Tricks for Sysadmins - This three-session track, part of the Global OTN Virtual Technology Summits (Americas July 9th, EMEA July 10th, APAC July 16th), will show you how to configure Oracle Linux to run Oracle Database 11g and 12c, how to use the latest networking capabilities in Oracle Solaris 11, and how to troubleshoot networking problems in Unix and Linux systems. Experts will be on hand to answer your questions live. Register now.
Disaster Recovery with Oracle Data Guard and Oracle GoldenGate -
The best part about preparing for the upcoming OTN Virtual Technology Summit is reading up on the technology we'll be presenting. Today's reading: Disaster recovery with Oracle Data Guard... it's an essential capability that every Oracle DBA should master.
Community blogs and social networks have been buzzing about the recent release of Oracle SOA Suite 12c, Oracle Mobile Application Foundation, and other new stuff. I've shared links to several such posts over the past several days on the OTN ArchBeat Facebook page. The three items below drew the most attention.
SOA Suite 12c: Exploring Dependencies - Visualizing dependencies between SOA artifacts | Lucas Jellema
Oracle ACE Director Lucas Jellema explores the use of the Dependency Explorer in JDeveloper 12c for tracking and visualizing dependencies in artifacts in SOA composites or Service Bus projects.
Managing Files for the Hybrid Cloud: Use Cases, Challenges and Requirements | Dave Berry
This paper by Dave Berry, Vikas Anand, and Mala Ramakrishnan discusses Oracle Managed File Transfer and best practices for sharing files within your enterprise and externally with partners and cloud services.
Say hello to the new Oracle Mobile Application Framework | Shay Shmeltzer
What's the Oracle Mobile Application Framework (MAF)? Oracle MAF, available as an extension to both JDeveloper and Eclipse, lets you develop a single application that will run on both iOS and Android devices. MAF is based on Oracle ADF Mobile, but adds many new features. Want more information? Click the link to read a post by product manager Shay Shmeltzer.
On July 4th Americans will celebrate the US victory over the British in the Revolutionary War by grilling mountains of meat, consuming mass quantities of beer, and making trips to the emergency room to reattach fingers blown off with poorly-handled fireworks. This hilarious video featuring comic actor Stephen Merchant offers a UK perspective on the outcome of that war.
A tip of a three-cornered hat to Oracle ACE Director Mark Rittman and Oracle Enterprise Architect Andrew Bond for bringing this video to my attention.
The release of SOA Suite 12c sees the addition of a Coherence Adapter to the list of Technology Adapters that are licensed with the SOA Suite. In this entry I provide an introduction to configuring the adapter and using the different operations it supports.
The Coherence Adapter provides access to Oracle's Coherence Data Grid. The adapter provides access to the cache capabilities of the grid; it does not currently support the grid's many other features, such as entry processors – more on this at the end of the blog.
Previously, if you wanted to use Coherence from within SOA Suite you either used the built-in caching capability of OSB or resorted to writing Java code wrapped as a Spring component. The new adapter significantly simplifies simple cache access operations.

Configuration
When creating a SOA domain the Coherence Adapter is shipped with a very basic configuration that you will probably want to enhance to support real requirements. In this section I look at the configuration required to use the Coherence Adapter in the real world.

Activate Adapter
The Coherence Adapter is not targeted at the SOA server by default, so this targeting needs to be performed from within the WebLogic console before the adapter can be used.
Create a cache configuration file
The Coherence Adapter provides a default connection factory to connect to an out-of-the-box Coherence cache, and also a cache called adapter-local. This is helpful as an example, but it is good practice to have only a single type of object within a Coherence cache, so we will need more than one. With only a single cache, it is hard to clean out all the objects of a particular type. Having multiple caches also allows us to specify different properties for each cache. The following is a sample cache configuration file used in the example.
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
This defines a single cache called TestCache. This is a distributed cache, meaning that the entries in the cache will be distributed across the grid. This enables you to scale the storage capacity of the grid by adding more servers. Additional caches can be added to this configuration file by adding additional <cache-mapping> elements.
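A complete cache configuration file along these lines might look like the following sketch. The cache name TestCache comes from the text above; the scheme name and service name are my assumptions and must match whatever you later configure on the connection factory:

```xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!-- Map the cache name used by the adapter to a caching scheme -->
    <cache-mapping>
      <cache-name>TestCache</cache-name>
      <scheme-name>TestDistributedScheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- A distributed (partitioned) scheme spreads entries across the grid -->
    <distributed-scheme>
      <scheme-name>TestDistributedScheme</scheme-name>
      <service-name>TestCacheService</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

Each additional cache gets its own <cache-mapping> entry, and can reuse an existing scheme or define a new one with different properties.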
The cache configuration file is referenced by the adapter connection factory and so needs to be on a file system accessible to all servers running the Coherence Adapter. It is not referenced from the composite.

Create a Coherence Adapter Connection Factory
We find the correct cache configuration by using a Coherence Adapter connection factory. The adapter ships with a few sample connection factories, but we will create a new one. To create a new connection factory we do the following:
- On the Outbound Connection Pools tab of the Coherence Adapter deployment, select New to create a new connection factory.
- Choose the javax.resource.cci.ConnectionFactory group.
- Provide a JNDI name. You can use any name, but something along the lines of eis/Coherence/Test is good practice (eis tells us this is an adapter JNDI, Coherence tells us it is the Coherence Adapter, and Test identifies which adapter configuration we are using).
- If requested to create a Plan.xml then make sure that you save it in a location available to all servers.
- From the outbound connection pool tab select your new connection factory so that you can configure it from the properties tab.
- Set the CacheConfigLocation to point to the cache configuration file created in the previous section.
- Set the ClassLoaderMode to CUSTOM.
- Set the ServiceName to the name of the service used by your cache in the cache configuration file created in the previous section.
- Set the WLSExtendProxy to false unless your cache configuration file is using an extend proxy.
- If you plan on using POJOs (Plain Old Java Objects) with the adapter rather than XML then you need to point the PojoJarFile at the location of a jar file containing your POJOs.
- Make sure to press enter in each field after entering your data. Remember to save your changes when done.
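Pulling those steps together, the finished connection-pool entry might end up looking roughly like this. Every value here is illustrative (the JNDI name follows the eis/Coherence/Test convention above, and the ServiceName must match the service used by your cache in the cache configuration file):

```
JNDI Name:           eis/Coherence/Test
CacheConfigLocation: /shared/config/test-cache-config.xml
ClassLoaderMode:     CUSTOM
ServiceName:         TestCacheService
WLSExtendProxy:      false
PojoJarFile:         (unset – only needed when caching POJOs rather than XML)
```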
You may need to stop and restart the adapter to get it to recognize the new connection factory.

Operations
To demonstrate the different operations I created a WSDL with the following operations:
- put – put an object into the cache with a given key value.
- get – retrieve an object from the cache by key value.
- remove – delete an object from the cache by key value.
- list – retrieve all the objects in the cache.
- listKeys – retrieve all the keys of the objects in the cache.
- removeAll – remove all the objects from the cache.
I created a composite based on this WSDL that calls a different adapter reference for each operation. Details on configuring the adapter within a composite are provided in the Configuring the Coherence Adapter section of the documentation.
I used a Mediator to map the input WSDL operations to the individual adapter references.
The input schema is shown below.
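A hedged XSD sketch of this input schema follows; the element names are taken from the description in this post, while the namespace and the use of xs:string for the content field are assumptions:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           elementFormDefault="qualified">
  <!-- A cache entry: the key plus the cached content -->
  <xs:complexType name="XMLCacheEntryType">
    <xs:sequence>
      <xs:element name="XMLCacheKey" type="xs:string"/>
      <xs:element name="XMLCacheContent" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="XMLCacheEntry" type="XMLCacheEntryType"/>
  <!-- Key on its own, used by get/remove -->
  <xs:element name="XMLCacheKey" type="xs:string"/>
  <!-- Wrapper for lists of entries, used by list -->
  <xs:element name="XMLCacheEntryList">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="XMLCacheEntry" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <!-- Wrapper for lists of keys, used by listKeys -->
  <xs:element name="XMLCacheEntryKeyList">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="XMLCacheKey" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <!-- Empty element for operations that take no input -->
  <xs:element name="XMLEmpty">
    <xs:complexType/>
  </xs:element>
</xs:schema>
```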
This type of pattern is likely to be used in all XML types stored in a Coherence cache. The XMLCacheKey element represents the cache key; in this schema it is a string, but it could be another primitive type. The other fields in the cached object are represented by a single XMLCacheContent field, but in a real example you are likely to have multiple fields at this level. Wrapper elements are provided for lists of elements (XMLCacheEntryList) and lists of cache keys (XMLCacheEntryKeyList). XMLEmpty is used for operations that don't require an input.

Put Operation
The put operation takes an XMLCacheEntry as input and passes this straight through to the adapter. The XMLCacheKey element in the entry is also assigned to the jca.coherence.key property. This sets the key for the cached entry. The adapter also supports automatically generating a key, which is useful if you don’t have a convenient field in the cached entity. The cache key is always returned as the output of this operation.
Get Operation

The get operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be retrieved.
Remove Operation

The remove operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be deleted.
RemoveAll Operation

This is similar to the remove operation, but instead of using a key as input it uses a filter. The filter could be overridden using the jca.coherence.filter property, but for this operation it was permanently set in the adapter wizard to the following query:
key() != ""
This selects all objects whose key is not equal to the empty string. All objects should have a key so this query should select all objects for deletion.
Note that there appears to be a bug in the return value: it is empty rather than containing the expected RemoveResponse element with a Count child element. Note that the documentation states:
When using a filter for a Remove operation, the Coherence Adapter does not report the count of entries affected by the remove operation, regardless of whether the remove operation is successful.
When using a key to remove a specific entry, the Coherence Adapter does report the count, which is always 1 if a Coherence Remove operation is successful.
Although this could be interpreted as meaning an empty part is returned, an empty part is a violation of the WSDL contract.
List Operation

The list operation takes no input and returns the result list returned by the adapter. The adapter also supports querying using a filter. This filter is essentially the where clause of a Coherence Query Language statement. When using XML types as cached entities, only the key() field can be tested, for example using a clause such as:
key() LIKE "Key%1"
This filter would match all entries whose key starts with “Key” and ends with “1”.
ListKeys Operation

The listKeys operation is essentially the same as the list operation except that only the keys are returned rather than the whole object.
Testing

To test the composite I used the new 12c Test Suite wizard to create a number of test suites. The test suites should be executed in the following order:
- CleanupTestSuite has a single test that removes all the entries from the cache used by this composite.
- InitTestSuite has 3 tests that each insert a single record into the cache. The returned key is validated against the expected value.
- MainTestSuite has 5 tests that list the elements and keys in the cache and retrieve individual inserted elements. This tests that the items inserted in the previous test are actually in the cache. It also tests the get, list and listKeys operations and makes sure they return the expected results.
- RemoveTestSuite has a single test that removes an element from the cache and tests that the count of removed elements is 1.
- ValidateRemoveTestSuite is similar to MainTestSuite but verifies that the element removed by the previous test suite has actually been removed.
One example of using the Coherence Adapter is to create a shared memory region that allows SOA composites to share information. An example of this is provided by Lucas Jellema in his blog entry First Steps with the Coherence Adapter to create cross instance state memory.
However there is a problem with creating global variables that can be updated by multiple instances at the same time: the get and put operations provided by the Coherence Adapter follow a last-write-wins model. This can be avoided in Coherence by using an Entry Processor to update the entry in the cache, but currently entry processors are not supported by the Coherence Adapter, so it is still necessary to use Java to invoke an entry processor.

Sample Code
The sample code I refer to above is available for download and consists of two JDeveloper projects, one with the cache config file and the other with the Coherence composite.
- CoherenceConfig has the cache config file that must be referenced by the connection factory properties.
- CoherenceSOA has a composite that supports the WSDL introduced at the start of this blog along with the test cases mentioned at the end of the blog.
The Coherence Adapter is a really exciting new addition to the SOA developer's toolkit; hopefully this article will help you make use of it.
Happy Friday! You've probably seen some notices, but we can't forget to remind you to register for the upcoming Virtual Technology Summits coming up July 9th, 10th and 16th. Something for everyone! Learn more here.
Kscope 2014 is now history. So can you think of a better time to watch the three most popular Kscope14 preview interviews from the OTN ArchBeat YouTube channel? Ah, the memories...
Stewart Bryson on OBIEE, ODI
Oracle ACE Director and RedPill Analytics co-founder Stewart Bryson talks about Oracle Business Intelligence, Oracle Data Integrator, and Oracle GoldenGate.
Tim Tow on Java Essbase API and ODTUG Community Service Day
Tim Tow, an Oracle ACE Director, founder and president of Applied OLAP, and an ODTUG board member, previews his four Kscope 2014 sessions and talks about ODTUG Community Service Day.
Shay Shmeltzer on Mobile App Development with Oracle ADF Mobile
Oracle Development Tools Director of Product Management Shay Shmeltzer talks about mobile application development and Oracle ADF, the core subjects of his presentations as part of the ADF and Fusion Development track at ODTUG KScope.
For you World Cup fans, here's a little something of questionable taste from Triumph, the Insult Comic Dog.
Helping Your Compiler Handle the Size of Your Constants - If you type the constants in your code incorrectly, your compiler will return an error. Darryl Gove explains why, and how to avoid the problem.
Playing with ZFS Shadow Migration - If you need to migrate data from a server running Oracle Solaris 10 or 11 to one running Oracle Solaris 11.1, use Shadow Migration. It's easy, and allows you to migrate shared ZFS, UFS, or VxFS (Symantec) file systems through NFS or even through a local file system. Alexandre shows how.
Register for the Virtual Technology Summits taking place July 9th (Americas, 9am to 1pm PT), 10th (EMEA, 9am to 1pm BST) and 16th (APAC, 10am to 2pm IST) and be blown away by the content of the Database track, Mastering Oracle Database Management & Development Techniques. In this track, Oracle ACEs and product team experts will present advanced features and management methods that will help you master your Oracle Database capabilities and drive greater performance, agility and manageability of your IT implementation. This track will build upon your skills with data management, migration, and performance.
Laura Ramsey, OTN Database Community Manager, has just launched the OTN DBA/DEV Watercooler. This blog is your official source of news covering Oracle Database technology topics and community activities from throughout the OTN Database and Developer Community. Find tips and in-depth technology information you need to master Oracle Database Administration or Application Development here. This Blog is compiled by @oracledbdev, the Oracle Database Community Manager for OTN, and features insights, tech tips and news from throughout the OTN Database Community.
Find out more about what you might hear around the OTN DBA/DEV Watercooler in Laura's inaugural post.
Read more here about the PRESS RELEASE: Oracle Delivers Latest Release of Oracle Enterprise Manager 12c
Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud
In coming weeks, I will be covering the latest topics, like:
- DbaaS Service Catalog incorporating High Availability and Disaster Recovery
- New Rapid Start kit
- Other new Features
Stay tuned!
Interesting info-graphics on Data-center / DB-Manageability
One of the key tasks that a DBA performs repeatedly is provisioning of databases, which also happens to be one of the top 10 Database Challenges according to the IOUG Survey.

Most of the challenge comes in the form of either a lack of standardization, or of provisioning being a long and error-prone process. This is where Enterprise Manager 12c can help, by making provisioning a standardized process using profiles and lock-downs, plus role and access separation, whereby a lead DBA can lock certain properties of a database (such as character set, Oracle Home location or SGA) so that junior DBAs can't change them during provisioning. The image below describes the solution:
In Short :
- It's fast
- It's easy
- And you have complete control over the lifecycle of your dev and production resources.
I actually wanted to show step-by-step details of how to provision an 11.2.0.4 RAC using the Provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:
Other Resources :
Official Doc : http://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_db_overview.htm#CJAJCIDA
Screen Watch : https://apex.oracle.com/pls/apex/f?p=44785:24:112210352584821::NO:24:P24_CONTENT_ID%2CP24_PREV_PAGE:5776%2C1
Others : http://www.oracle.com/technetwork/oem/lifecycle-mgmt-495331.html?ssSourceSiteId=ocomen
Gee, that didn’t work.
For those of you wondering about the title of this post, I’m referring to the brew package manager for Mac OS — a nice utility for installing Unix-like packages on Mac OS similar to how yum / apt-get can be used on Linux.
I particularly like the way brew uses /usr/local and symlinks for clean installations of software without messing up the standard Mac paths.
Unfortunately, there isn’t a brew “formula” for installing sqlplus and the Instant Client libraries (and probably never will be, due to licensing restrictions), but we can come close using ideas from Oracle ACE Ronald Rood and his blog post Oracle Client 11gR2 (11.2.0.3) for Apple Mac OS X (Intel).
Go there now and read up through “unzipping the files” — after that, return here and we’ll see how to simulate a brew installation.
Organize the software
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.3.0/bin
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.3.0/lib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.3.0/jdbc/lib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.3.0/rdbms/jlib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.3.0/sqlplus/admin
Change to the instantclient_11_2 directory where the files were extracted, and execute the following commands to place them into our newly created directories:
mv ojdbc* /usr/local/Oracle/product/instantclient/11.2.0.3.0/jdbc/lib/
mv x*.jar /usr/local/Oracle/product/instantclient/11.2.0.3.0/rdbms/jlib/
mv glogin.sql /usr/local/Oracle/product/instantclient/11.2.0.3.0/sqlplus/admin/
mv *dylib* /usr/local/Oracle/product/instantclient/11.2.0.3.0/lib/
mv *README /usr/local/Oracle/product/instantclient/11.2.0.3.0/
mv * /usr/local/Oracle/product/instantclient/11.2.0.3.0/bin/
While these commands place the files where we want them, we’ll need to do a few more things to make them usable. If you’re using brew already, /usr/local/bin will be in your PATH and you won’t need to add it. We’ll mimic what brew does and symlink sqlplus into /usr/local/bin.
cd /usr/local/bin
ln -s ../Oracle/product/instantclient/11.2.0.3.0/bin/sqlplus sqlplus
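Before touching /usr/local, you can rehearse the relative-symlink trick in a throwaway directory. This is purely illustrative (the version directory is an example); the point is that a relative link keeps resolving even if the whole prefix is relocated:

```shell
# Rehearse the layout in a temporary prefix and confirm the symlink resolves
ROOT=$(mktemp -d)
mkdir -p "$ROOT/Oracle/product/instantclient/11.2.0.3.0/bin" "$ROOT/bin"
touch "$ROOT/Oracle/product/instantclient/11.2.0.3.0/bin/sqlplus"
# Same relative target as the real install uses from /usr/local/bin
ln -s ../Oracle/product/instantclient/11.2.0.3.0/bin/sqlplus "$ROOT/bin/sqlplus"
# Show the link target; the link resolves because the path is relative to $ROOT/bin
readlink "$ROOT/bin/sqlplus"
```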
This will put sqlplus on our path, but we still need to set the environment variables for things like ORACLE_BASE, ORACLE_HOME and the DYLD_LIBRARY_PATH. Ronald sets them manually and then adds them to his .bash_profile, but I wanted to mimic some of the brew packages and have a .sh file to set variables from /usr/local/share.
To do so, I created another directory underneath /usr/local/Oracle to hold my .sh file:
cd /usr/local/Oracle/product/instantclient/11.2.0.3.0
mkdir -p share/instantclient
cd /usr/local/share
ln -s ../Oracle/product/instantclient/11.2.0.3.0/share/instantclient/ instantclient
Now I can create an instantclient.sh file and place it in /usr/local/Oracle/product/instantclient/11.2.0.3.0/share/instantclient/ with the content I want in my environment.
$ cat /usr/local/share/instantclient/instantclient.sh
export ORACLE_BASE=/usr/local/Oracle
export ORACLE_HOME=$ORACLE_BASE/product/instantclient/11.2.0.3.0
export DYLD_LIBRARY_PATH=$ORACLE_HOME/lib
export TNS_ADMIN=$ORACLE_BASE/admin/network
Once I have this file in place, I can edit my .bash_profile file and add the following line:

. /usr/local/share/instantclient/instantclient.sh

Open up a new Terminal window and voila! A working sqlplus installation that mimics a brew package install!
Nationwide Deploys Database Applications 600% Faster
Heath Carfrey of Nationwide, a leading global insurance and financial services organization, discusses how Nationwide saves time and effort in database provisioning with Oracle Enterprise Manager.
- Provisioning Databases using Profiles (aka Gold Images)
- Automated Patching
- Config/Compliance tracking
A quick note on how to install EMCLI, which is used for various CLI operations with EM. I was looking to test some database provisioning automation via EMCLI and thus needed to set it up.
To set up EMCLI on the host, follow these steps:
1. Download the emcliadvancedkit.jar from the OMS using the URL https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
2. Set your JAVA_HOME environment variable and ensure that it is part of your PATH. You must be running Java 1.6.0_43 or greater. For example:
o setenv JAVA_HOME /usr/local/packages/j2sdk
o setenv PATH $JAVA_HOME/bin:$PATH
3. You can install the EMCLI with the scripting option in any directory, either on the same machine on which the OMS is running or on any machine on your network (download the emcliadvancedkit.jar to that machine).
java -jar emcliadvancedkit.jar client -install_dir=<emcli client dir>
4. Run emcli help sync from the EMCLI Home (the directory where you have installed emcli) for instructions on how to use the "sync" verb to configure the client for a particular OMS.
5. Navigate to the Setup menu then the Command Line Interface. See the Enterprise Manager Command Line Tools Download page for details on setting EMCLI.
Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone
Since the demands from the business for IT services are non-stop, creating copies of production databases in order to develop, test and deploy new applications can be labor intensive and time consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.
Join us for an in-depth
technical webcast and learn how Oracle Cloud Management Pack for Oracle
Database's capability called Snap Clone, can fundamentally improve the
efficiency and agility of administrators and QA Engineers while saving
CAPEX on storage. Benefits include:
- Agile provisioning
(~ 2 minutes to provision a 1 TB database)
- Over 90% storage
- Reduced administrative
overhead from integrated lifecycle management
April 24 — 10:00 a.m. PT | 1:00 p.m. ET
May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET
May 22 — 10:00 a.m. PT | 1:00 p.m. ET
Found a very good paper: http://research.microsoft.com/pubs/204499/a20-appuswamy.pdf
This paper discusses whether Hadoop is the right choice of analytics infrastructure. It is hard to argue with the industry trend. However, Hadoop is no longer new, and it is time for people to calm down and rethink the approach.
Thank you for visiting. This blog has been closed down and merged with the WebCenter Blog, which contains blog posts and other information about ECM, WebCenter Content, the content-enabling of business applications and other relevant topics. Please be sure to visit and bookmark https://blogs.oracle.com/webcenter/ and subscribe to stay informed about these topics and many more. From there, use the #ECM hashtag to narrow your focus to topics that are strictly related to ECM.
See you there!
A nice little feature in Oracle Database 12c is the ability to query patching information via SQL. You can do this from SQL*Plus or any other SQL interface (JDBC/ODBC, etc.). You can find more details here
However you won't be surprised to find that the following query doesn't currently return any useful information.
SYS@//oracle12c/orcl > select DBMS_QOPATCH.GET_OPATCH_LIST from dual;

GET_OPATCH_LIST
------------------------------------------------------------------------------------------------------------------------
<patches/>
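As a sketch of driving these queries from the shell, the snippet below wraps SQL*Plus in a small function. The "/ as sysdba" connect string is a placeholder assumption for your environment; GET_OPATCH_INSTALL_INFO and GET_OPATCH_LIST are documented functions of the 12c DBMS_QOPATCH package, both returning XML.

```shell
# Sketch: querying the 12c patch inventory from the shell via SQL*Plus.
# Connect string is a placeholder; adjust for your environment.
opatch_via_sql() {
  sqlplus -s "/ as sysdba" <<'EOF'
set long 100000 pagesize 0
-- ORACLE_HOME and inventory location, as XML
select dbms_qopatch.get_opatch_install_info from dual;
-- the full patch list (an empty <patches/> on an unpatched home)
select dbms_qopatch.get_opatch_list from dual;
EOF
}
```

The function is only defined here; call opatch_via_sql on a host where the 12c instance is running.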
I’ve been using the very useful scripts from FlashDBA to run SLOB2 on our new system, but unfortunately the analyze one is not RAC aware, so I’ve modified it, in very minor ways, such that it can use an AWR Global report (awrgrpt.sql) as input and still extract the same values that the original does.
I call the script slob2-rac-analyze.sh
Here is an example run – ignore the numbers as they are not representative of anything in particular.
a555.net(jeff.a1):/app/support/SLOB: ./slob2-rac-analyze.sh rac_awr_12jul2013/awr.20.032/awr.20.032.txt > slob.csv
Info : Analyzing file rac_awr_12jul2013/awr.20.032/awr.20.032.txt
Info : Filename = awr.20.032.txt
Info : Update Pct = 20
Info : Workers = 032
Info : Read IOPS = 85.8
Info : Write IOPS = 33.0
Info : Redo IOPS = 15.6
Info : Total IOPS = 134.4
Info : Read Num Waits = 712
Info : Read Wait Time = 0.58
Info : Read Latency us = 814.606
Info : Write Num Waits = 926
Info : Write Wait Time = 0.28
Info : Write Latency us = 302.375
Info : Redo Num Waits = 2043
Info : Redo Wait Time = 0.37
Info : Redo Latency us = 181.106
Info : Num CPUs = 384
Info : Num CPU Cores = 192
Info : Num CPU Sockets = 24
Info : Linux Version = Red Hat Enterprise Linux Server release 6.3 (Santiago)
Info : Kernel Version = 2.6.32-279.2.1.el6.x86_64
Info : Processor Type = Intel(R) Xeon(R) CPU E7- 2830 @ 2.13GHz
Info : SLOB Run Time = 300
Info : SLOB Work Loop = 0
Info : SLOB Scale = 10000
Info : SLOB Work Unit = 256
Info : SLOB Redo Stress = LIGHT
Info : SLOB Shared Data Mod = 0
Info : No more files found
Info : =============================
Info : AWR Files Found = 1
Info : AWR Files Processed = 1
Info : Errors Experienced = 0
Info : =============================
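For anyone curious how the per-run metadata at the top of that output is derived: the Update Pct and Workers values appear to come straight from the AWR filename (awr.<pct>.<workers>.txt). A minimal sketch of that parsing, using the sample filename from the run above:

```shell
# The analyze run names AWR files awr.<update_pct>.<workers>.txt
# (e.g. awr.20.032.txt above); pull the two fields back out of the name.
fname=awr.20.032.txt
update_pct=$(echo "$fname" | cut -d. -f2)   # second dot-separated field
workers=$(echo "$fname" | cut -d. -f3)      # third dot-separated field
echo "Update Pct = $update_pct"
echo "Workers    = $workers"
```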
Jonathan Lewis has a nice article covering the different AWR Reports.
I’ve only tested it on the system at work, where it seems to work OK. Your mileage may vary, and I’d be happy to hear comments to the contrary regarding the changes I’ve made for use on RAC, but obviously the script is still 99% unchanged, so please contact FlashDBA if there are any generic issues you want to raise.
I’m not a unix shell script guy, but it seems to work…see what you think.