Feed aggregator

Disable flashback archive

Tom Kyte - Wed, 2016-09-14 00:46
For lower environments, we obfuscate data by updating it. Before obfuscation, I want to turn off flashback archive so that its tablespace would not run out of space. How do I turn off flashback archive and turn it back on again? Thanks.
Categories: DBA Blogs

Query session show inactive but having long operation running

Tom Kyte - Wed, 2016-09-14 00:46
Why does the query session show as inactive while the long operation is still in progress and running?
Categories: DBA Blogs

Oracle OpenWorld 2016 and Where You'll Find Me

Shay Shmeltzer - Tue, 2016-09-13 17:49

It's that time of the year - Oracle OpenWorld is taking place starting on Sunday - and my calendar is full of activities.

I'm going to be presenting on multiple tools and frameworks including sessions on Oracle Application Builder Cloud Service, Oracle JDeveloper and Oracle ADF, Oracle Developer Cloud Service and a session discussing which dev framework and tool is right for you. 

In case you want to catch me at #OOW16 here is my schedule:

Simplified Multichannel App Development for Business Users [CON2884] 
Monday, Sep 19, 1:45 p.m. | Moscone West - 2005 - A session where I'll demo how easy it is to create and host your own applications with Oracle Application Builder Cloud Service.

Oracle Application Development Framework and Oracle JDeveloper: What’s New [CON1226]
Tuesday, Sep 20, 4:00 p.m. | Moscone West - 2018 - A quick review of the new features we added in the 12.2.* releases of JDeveloper and ADF

Oracle Development Tools and Frameworks: Which One Is Right for You? [MTE6650]
Tuesday, Sep 20, 6:15 p.m. | Moscone South - 301 - A session for all of those who are not sure which technology is right for them, or for those who want to ask me "is Oracle [fill in the product name] dead?"

A Guide to Cloud-Based Agile Development Methodology Adoption [CON1947]
Wednesday, Sep 21, 12:15 p.m. | Moscone West - 2018 - A demo-focused session that showcases how Oracle Developer Cloud Service helps your team adopt agile development.

No Code Required: Application Development and Publishing Made Easy [HOL7402]
Tuesday, Sep 20, 11:30 a.m. | Hotel Nikko - Nikko Ballroom III (3rd Floor)
Monday, Sep 19, 4:15 p.m. | Hotel Nikko - Nikko Ballroom III (3rd Floor) - Your two chances to try out the new Oracle Application Builder Cloud Service and develop your first app

Agile Development Management and Continuous Integration Simplified [HOL7403]
Wednesday, Sep 21, 8:00 a.m. | Hotel Nikko - Nikko Ballroom III (3rd Floor) - Your chance to manage a whole development team agile process using Oracle Developer Cloud Service

I'm also going to be in the mobile theater in the mobile area at the demo grounds on Tue and Wed at 10:30, doing a quick demo of ABCS and its mobile capabilities.

In between these sessions, you'll be able to find me at the Oracle demo grounds doing some shifts at the Oracle ADF booth (which is in the Moscone South far left corner) - the rest of our pods are close by, including JET, DevCS, ABCS and even Forms :-)

And if I have any spare time, I'll try to catch some of the other sessions on this list of Dev tools and framework sessions.

See you next week.


Categories: Development

Game On! Let's Learn and Play at #OOW16

WebCenter Team - Tue, 2016-09-13 17:13

Are you ready to get your game on? Download the Oracle Mobile Challenge Game (powered by our very own Oracle Mobile Cloud Service), attend Mobile and Content and Experience sessions to score points (think mobile beacons!) and you may win some serious Daily Prizes*!!! And we mean - some good stuff, thanks to our partner Samsung! Take a look:


Here are some helpful clues on our #OOW16 Digital Experience product strategy sessions that you can’t afford to miss:

Monday, Sep 19, 2016

  • Content and Experience Management: Roadmap and Vision  [CON7256]
    11:00 a.m. – 11:45 a.m. | Moscone West—2014
    David Le Strat, Sr. Director, Product Management, Oracle
  • Content Management in the Cloud: Strategy & Roadmap [CON7257]
    12:30 p.m. – 1:15 p.m. | Moscone West—2014
    Thyaga Vasudevan, Senior Director, Product Management, Oracle

    Edi Piovezani, Diretor de Tecnologia, Omni Financeira
    Daniel Martins, Programmer / Developer, Accurate Software Ltda
  • Digital Experience in the Cloud: Strategy & Roadmap [CON7258]
    1:45 p.m. – 2:30 p.m. | Moscone West—3000

    Igor Polyakov, Sr Principal Product Manager, Oracle

    Mariam Tariq, Sr Director Product Management, Oracle

Tuesday, Sep 20, 2016

  • WebCenter Content, Imaging, Capture & Forms Recognition: Roadmap & Strategy [CON7259]
    4:00 p.m. – 4:45 p.m. | Moscone West—2014
    Jessika Acosta, Senior Manager WebCenter Technology, McKesson Medical-Surgical

    Marcus Diaz, WebCenter Sr. Principal Product Manager, Oracle

  • Oracle WebCenter Digital Experience: Sites & Portal Strategy and Roadmap [CON7260]
    5:15 p.m. – 6:00 p.m. | Moscone West—2014

    Mariam Tariq, Sr Director Product Management, Oracle

    Valbona Lachapelle, TIAA

So, get to our recently launched website at https://sites.oracle.com/oowdx (powered by Oracle Sites Cloud Service and Oracle Documents Cloud Service), get information on sessions, download the Oracle Mobile Challenge app and plan your mobile challenge strategy!

Who says you can’t learn and play at the same time? See you at #OOW16!


VirtualBox 5.1.6

Tim Hall - Tue, 2016-09-13 17:12

VirtualBox 5.1.6 has been born.

Downloads and changelog are in the usual places. It’s a maintenance release, so there are a bunch of bug fixes in there.

I’ve installed it on my Mac with no problems. It might be a couple of weeks before I do the install on my work PC or my Linux servers.

Happy upgrading!

Cheers

Tim…


Before Investing in Data Archiving, Ask These Questions

Kubilay Çilkara - Tue, 2016-09-13 15:39
Data has a way of growing so quickly that it can appear unmanageable, but that is where data archiving shines through.  The tool of archiving helps users maintain a clear environment on their primary storage drives.  It is also an excellent way to keep backup and recovery spheres running smoothly.  Mistakes and disasters can and will happen, but with successful archiving, data can be easily recovered so as to successfully avert potential problems.  Another benefit of archiving data is that companies spare themselves the very real possibility of a financial headache!  In the end, this tool will help to cut costs related to effectively storing and protecting your data.  Who doesn’t want that?  It’s a win-win situation for everyone involved!

When It Comes to Archiving Data for Storage, Where Do I Begin?
It may feel stressful at first to implement a data archiving plan into your backup and storage sphere, but don’t be worried!  To break it down, data simply needs to go from one place into another.  In other words, you are just moving data!  Sounds easy enough, right?  If things get sticky however, software is available on the market that can help users make better decisions and wise choices when it comes to moving your data.  In your preparation to incorporate archiving, you must ask yourself some important questions before you start.  This way, companies will be guaranteed to be making choices that meet the demands of their particular IT environment.  So, don’t stress-just be well prepared and ask the right questions!  Right now, you may be wondering what those questions are.  Below you will see a few of them that are most necessary to ask in order to guarantee your archiving success. 

What You Should Be Asking: 

1.      With respect to your data, what type is it that you need to store?  “Cold” data may not be something you are familiar with, but this is a term used for data that has been left untouched for over 6 months.  If this is the type of data you are trying to store, good for you!  This particular type of data must be kept secure in the event that it is needed for compliance audits or even for a possible legal situation that could come up down the road.  At any rate, all data - regardless of why it is being stored - must be taken care of appropriately so as to ensure security and availability over time.  You can be assured you have invested in a good archiving system if it is able to show you who viewed the data and at what time the data was viewed.

2.     Are you aware of the particular technology you are using?  It is essential to realize that data storage is highly dependent upon two factors:  the hardware and the media you are using.  Keep in mind that interfaces are a very important factor as well, and they must be upgraded from time to time.  A point to consider when storing data is that tape has an incredibly long life, and with low usage and proper storage it could potentially last for 100 years!  Wow!  However, there are arguments over which has the potential to last longer...tape or hard disk.  Opponents of the long life of tape argue that with proper powering down of drives, hard disk will actually outlive tape.  Here is where the rubber meets the road for this theory:  it has proven problematic over time, and can leave users with DUDL (Data Unavailable Data Loss).  This problem IS as terrible as it sounds.  Even though SSDs lack mechanical parts, their electronic functions result in cell degradation, ultimately causing burnout.  It is unknown, even to vendors, what the SSD lifespan is, but most say 3 years is what they are guaranteed for.  Bear in mind that, as man-made creations, these tools will die over time.  Whatever your choice in technology, you must be sure to PLAN and TEST.  These two things are the most essential tasks in keeping data secure over time!

3.     Do you know the format of your data?  It is important to acknowledge that “bits” have to be well-kept in order to ensure that your stored data will be usable over the long run.  Investing in appropriate hardware to accurately read data as well as to interpret it is essential.  It is safe to say that investing in such a tool is an absolute MUST in order to ensure successful long-term storage!

4.     Is the archive located on the cloud or in your actual location?  Did you know this era is one in which storing your data on the cloud is a viable means of long-term archiving?  Crazy, isn’t it!  Truth be told, access to the cloud has been an incredibly helpful advancement when it comes to storage, but as all man-made inventions prove, problems are unavoidable.  Regarding cloud storage, at this point in time, the problems arise with respect to long-term retention.  However, many enjoy the simplicity of pay-as-you-go storage options available with the cloud.  Users also like relying upon particular providers to assist them with their cloud storage management.  In looking to their providers, people are given information such as the type, age, and interface of their storage devices.  You might be asking yourself why this is so appealing.  The answer is simple.  Users ultimately gain the comfort of knowing that they can access their data at any time, and that it will always be accessible.   At this point, many are wondering what the downsides are of using the cloud for storage.  Obviously, data will grow over time, inevitably causing your cloud to grow, which in turn will raise your cloud storage costs.  Users will be happy to know, however, that cloud storage is actually a more frugal choice than opting to store data in a data center.  In the end, the most common complaint regarding cloud storage is the TIME involved in trying to access data in the event of a necessary recovery, restore, or a compliance issue.  Oftentimes, these issues are quite sensitive with respect to timing, therefore data must be available quickly in case of a potential disaster.

5.     What facts do you know about your particular data?  There is often sufficient knowledge of storage capacity, but many are not able to bring to mind how much data is stored per application.  Also, many do not know who the owner of particular data is, as well as what age the data may be.  The good news is that there is software on the market that helps administrators quickly figure these factors out.  This software produces reports with the above information by scanning environments.  With the help of these reports, it can be quickly understood what data is needed and in turn, will efficiently archive that data.  All it takes is a push of a button to get these clear-cut reports with all the appropriate information on them.  What could be better than that?

In Closing, Companies MUST Archive!


In a world that can be otherwise confusing, companies must consider the importance of archiving in order to make their data and storage management easier to grasp.  Maybe if users understood that archiving is essential in order to preserve the environment of their IT world, more people would quickly jump on board.  The best news is that archiving can occur automatically, taking the guesswork out of the process.  Archiving is a sure step to keeping systems performing well, budgets in line, and data available and accessible.  Be sure to prepare yourself before investing in data archiving by asking the vital questions laid out above.  This way, you can be certain you have chosen a system which will fully meet your department needs!

Writer Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.
Categories: DBA Blogs

Finding the Right Tools to Manage Data in Today’s Modern World

Kubilay Çilkara - Tue, 2016-09-13 15:37
A common concern among companies is if their resources are being used wisely.   Never before has this been more pressing than when considering data storage options.  Teams question the inherent worth of their data, and often fall into the trap of viewing it as an expense that weighs too heavily on the budget.  This is where wisdom is needed with respect to efficient resource use and the task of successful data storage.   Companies must ask themselves how storing particular data will benefit their business overall.  Incorporating a data storage plan into a business budget has certainly proven to be easier said than done.  Many companies fail at carrying out their desires to store data once they recognize the cost associated with the tools that are needed.  You may be wondering why the failure to follow through on these plans is so common.  After all, who wouldn’t want to budget in such an important part of company security?  The truth of the matter is that it can all be very overwhelming once the VAST amount of data that actually exists is considered, and it can be even more stressful to attempt to manage it all.

When considering what tools to use for management of one’s data, many administrators think about using either the cloud or virtualization.  Oftentimes during their research, teams question the ability of these two places to successfully house their data.  Truth be told, both the cloud and virtualization are very reliable, but are equally limited with respect to actually managing data well.  What companies really need is a system that can effectively manage huge amounts of data and place it into the right categories.  In doing so, the data becomes much more valuable. 

Companies must change their mindsets regarding their stored data.  It is time to put away the thoughts that data being stored is creating a money pit.  Many times, people adopt an “out with the old, in with the new” philosophy, and this is no different with respect to storing old data.  The truth is, teams eventually want new projects to work on, especially since some of the old data storage projects can appear to be high maintenance.  After all, old data generally needs lots of backups and archiving, and is actually quite needy.  These negative thoughts need to fade away however, especially in light of the fact that this data being stored and protected is a staple to company health and security.  Did you know that old and tired data actually holds the very heart of a company?  It is probable that many forget that fact, since they are so quick to criticize it!  Also notable is the fact that this data is the source of a chunk of any given business’s income.  Good to know, isn’t it?!  Poor management of data would very likely be eradicated if more teams would keep the value of this data on the forefront of their minds.  It is clear that the task of managing huge amounts of company data can be intimidating and overwhelming, but with the right attitude and proper focus, it is not impossible. 

The best news yet is that there are excellent TOOLS that exist in order to help companies take on the monumental task of managing company data.  In other words, administrators don’t have to go it alone!  The resources out there will assist users with classifying and organizing data, which is a huge relief NOT to have to do manually.  When teams readily combine these tools with their understanding of business, they have a sure recipe for success when it comes to management.  To break it down even further, let’s use a more concrete example.  Imagine you are attempting to find out information about all the chemical engineers in the world.  You must ask yourself how you would go about doing this in the most efficient manner.  After all, you would clearly need to narrow down your search since there are more than 7 billion individuals on this planet.  Obviously, you wouldn’t need to gather information on every single human being, as this would be a huge time waster.  On the contrary, to make things more streamlined, you would likely scope out and filter through various groups of people and categorize them by profession.  Maybe you would consider a simple search among engineering graduates.  The above example of organizing information is probably the method most people use to organize simple things in their life on a daily basis.  Keep in mind, having these tools to help streamline data is one thing, but one must also possess a good understanding of business plans, as this will allow for a better grasp of corporate data.  With these things in line, businesses can be assured that their data management systems will be in good hands and running smoothly. 

What a great gift to be alive during this time, with easy access to modern tools which help make management of data much more understandable to those in charge.  These valuable resources also help to create confidence in administrators so that they feel well-equipped to navigate the sometimes harsh IT world.  For example, as data types and volumes change, these tools assist in lowering storage capacity needs, lowering backup and recovery costs, and passing compliance audits with flying colors.  In fact, many in charge can rest their heads at night knowing their companies are totally in line with modern-day regulatory compliance rules. 


In the end, the value of wisdom with respect to making various decisions in an IT department is incredibly important.  Arguably, one of the foremost areas to which this applies is the sphere of data management.  The truth is, dependence upon tools alone to navigate the sometimes rough waters of the IT world will not be enough to get teams through.  As stated above, wisdom and the right attitude regarding data and its importance to company health and security are vital to proper management.  In addition, clients ought to be looking for resources to assist them with organizing and classifying their systems of files.  The point is clear:  intelligence paired with the proper tools will give companies exactly what they need for efficient and effective data management.  At the end of the day, users can rest easy with the knowledge that their data - which is the bread and butter of their companies - is in good hands.  What more could you ask for?

Writer Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.
Categories: DBA Blogs

ElasticSearch cluster using Docker Swarm mode 1.12

Marcelo Ochoa - Tue, 2016-09-13 14:30
The latest release of the Docker project integrates a Swarm mode to manage your cluster environment.
The idea of this post is to show how to use Docker Swarm to deploy an ElasticSearch 5.0 cluster. To be similar to a production data center, I am using docker-machine to deploy the six nodes of my Swarm cluster; a similar deployment could be replicated in a public cloud such as AWS or DigitalOcean by simply changing a parameter in the docker-machine create command.
My six nodes were created using:
docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox manager2
...
Then I changed some physical settings of the cluster nodes, such as the DNS resolver and memory allocation:
VBoxManage modifyvm "manager1" --natdnshostresolver1 on --memory 1024
VBoxManage modifyvm "manager2" --natdnshostresolver1 on --memory 1024
...
VBoxManage modifyvm "worker4" --natdnshostresolver1 on --memory 2048
To start the cluster and initialize the swarm, the steps are:
docker-machine start manager1
export MANAGER1_IP=$(docker-machine ip manager1)
docker-machine ssh manager1 docker swarm init --advertise-addr eth1
export MGR_TOKEN=$(docker-machine ssh manager1 docker swarm join-token manager -q)
export WRK_TOKEN=$(docker-machine ssh manager1 docker swarm join-token worker -q)
docker-machine start manager2
docker-machine ssh manager2 \
docker swarm join \
--token $MGR_TOKEN \
$MANAGER1_IP:2377
docker-machine start worker1
docker-machine ssh worker1 \
docker swarm join \
--token $WRK_TOKEN \
$MANAGER1_IP:2377
...
docker-machine start worker4
docker-machine ssh worker4 \
docker swarm join \
--token $WRK_TOKEN \
$MANAGER1_IP:2377
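Since the worker joins are identical apart from the machine name, a small shell loop (equivalent to the commands above, including the elided workers) saves some typing:

for w in worker1 worker2 worker3 worker4; do
  docker-machine start $w
  docker-machine ssh $w docker swarm join --token $WRK_TOKEN $MANAGER1_IP:2377
done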
Back on the manager1 console, we can check the cluster status using:
mochoa@localhost:~$ eval $(docker-machine env manager1)
mochoa@localhost:~$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
157fk6bsgck766kbmui2723b1 *  manager1  Ready   Active        Leader
588917vcd8eqlx2i26qr45wdl    manager2  Ready   Active        Reachable
bpeivuf6zmubjuhwczifo0o1u    worker1   Ready   Active      
btqhwp5ju82ydaj35tuwwmz7f    worker4   Ready   Active      
ct7lyum45a4cm4rtzftqtnrxp    worker3   Ready   Active      
duo3oykjte0hd0wr3nm8oj7c9    worker2   Ready   Active
       
More information about how to create a docker swarm cluster is available here.
Once the cluster is ready, we have to deploy an ElasticSearch image, locally modified from the official one; here are the steps:
mochoa@localhost:~/es$ cat Dockerfile
#name of container: elasticsearch/swarm
#version of container: 5.0.0
FROM elasticsearch:5.0.0
MAINTAINER Marcelo Ochoa  "marcelo.ochoa@gmail.com"
COPY config/elasticsearch.yml /usr/share/elasticsearch/config
EXPOSE 9200 9300
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
mochoa@localhost:~/es$ cat config/elasticsearch.yml
network.host: ${HOSTNAME}
node.name: ${HOSTNAME}
# this value is required because we set "network.host"
# be sure to modify it appropriately for a production cluster deployment
discovery.zen.minimum_master_nodes: 1
mochoa@localhost:~/es$ eval $(docker-machine env manager1)
mochoa@localhost:~/es$ docker build -t "elasticsearch/swarm:5.0.0" .
...
mochoa@localhost:~/es$ eval $(docker-machine env worker4)
mochoa@localhost:~/es$ docker build -t "elasticsearch/swarm:5.0.0" .
The idea of using a modified version of the ElasticSearch image is to change the default config file to use the ${HOSTNAME} variable instead of the auto-discovery IP provided by ElasticSearch. This is important because Swarm deployment mode does a lot of work with the mapped addresses; see this page to find out more about which IPs are mapped when a Docker service starts.
As the previous link shows, each service deployed in a network has an internal IP and a virtual IP; a hostname within the service resolves to the internal IP and not to the VIP associated with the service.
OK, we have the cluster started and the Docker image deployed on each node of the cluster; let's first create an overlay network to bind my ElasticSearch nodes:
mochoa@localhost:~/es$ docker network create -d overlay es_cluster
and finally deploy my services (master node, data nodes and ingest nodes):
mochoa@localhost:~/es$ docker service create --network es_cluster --name es_master --constraint 'node.labels.type == es_master' --replicas=1 -p 9200:9200 -p 9300:9300 --env ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch/swarm:5.0.0 -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name="ESCookBook" -E node.master=true -E node.data=false -E discovery.zen.ping.unicast.hosts=es_master
mochoa@localhost:~/es$ docker service create --network es_cluster --name es_data --constraint 'node.labels.type == es_data' --replicas=2 --env ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch/swarm:5.0.0 -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name="ESCookBook" -E node.master=false -E node.data=true -E discovery.zen.ping.unicast.hosts=es_master
mochoa@localhost:~/es$ docker service create --network es_cluster --name es_ingest --constraint 'node.labels.type == es_ingest' --replicas=1 --env ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch/swarm:5.0.0 -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name="ESCookBook" -E node.master=false -E node.data=false -E node.ingest=true -E discovery.zen.ping.unicast.hosts=es_master
Some points worth noting:
  • all services are attached to the overlay network es_cluster (--network es_cluster)
  • only the service named es_master (1 replica) has the ElasticSearch ports mapped (-p 9200:9200 -p 9300:9300)
  • the es_master service is constrained to nodes with the label es_master; these nodes are marked specially to run ElasticSearch master nodes, and in our deployment only nodes manager1 and manager2 are marked as node.labels.type == es_master (see the sketch after this list for how the labels are assigned)
  • the es_data service (2 replicas) has the parameters node.master=false, node.data=true and will run on nodes marked as es_data (worker1, worker2 and worker3)
  • the es_ingest service (1 replica) has the parameters node.master=false, node.data=false, node.ingest=true; this kind of ElasticSearch node is specifically targeted at CPU-heavy processing
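The label assignment itself isn't shown above; a minimal sketch of how the labels could be set, assuming the manager1 client context is active as earlier (docker node update --label-add is part of this Docker release):

docker node update --label-add type=es_master manager1
docker node update --label-add type=es_master manager2
docker node update --label-add type=es_data worker1
docker node update --label-add type=es_data worker2
docker node update --label-add type=es_data worker3
docker node update --label-add type=es_ingest worker4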
A nice picture using the manomarks/visualizer Docker image shows four ElasticSearch nodes up and running:

Docker services can be checked from the command line using:
mochoa@localhost:~/es$ docker service ls
ID            NAME       REPLICAS  IMAGE                      COMMAND
1bofyw8lrcai  es_master  1/1       elasticsearch/swarm:5.0.0  -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name=ESCookBook -E node.master=true -E node.data=false -E discovery.zen.ping.unicast.hosts=es_master
8qh9why7q8zn  es_data    2/2       elasticsearch/swarm:5.0.0  -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name=ESCookBook -E node.master=false -E node.data=true -E discovery.zen.ping.unicast.hosts=es_master
97lc13vebfoz  es_ingest  1/1       elasticsearch/swarm:5.0.0  -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name=ESCookBook -E node.master=false -E node.data=false -E node.ingest=true -E discovery.zen.ping.unicast.hosts=es_master
If you want to see the logs from the ElasticSearch master node, the command is:
mochoa@localhost:~/es$ docker logs -f es_master.1.8fuhvyy9scidk9fzmmme9k14l
[2016-09-13 18:09:57,680][INFO ][node                     ] [c251c9e143c1] initializing ...
......
[2016-09-13 18:11:07,974][INFO ][cluster.service          ] [c251c9e143c1] added {{9b4551cd2972}{Jt29mm9cQje18EEeP4pW_w}{LhBu2B9ETs6DB_uemvlKQQ}{10.0.1.6}{10.0.1.6:9300},}, reason: zen-disco-node-join[{9b4551cd2972}{Jt29mm9cQje18EEeP4pW_w}{LhBu2B9ETs6DB_uemvlKQQ}{10.0.1.6}{10.0.1.6:9300}]
[2016-09-13 18:11:08,626][INFO ][cluster.service          ] [c251c9e143c1] added {{a83d5e3a107d}{tpG4RN9dQ0OOiC7rOzmPiA}{r4aCN4S0Rk-Akqdfu1WEQg}{10.0.1.5}{10.0.1.5:9300},}, reason: zen-disco-node-join[{a83d5e3a107d}{tpG4RN9dQ0OOiC7rOzmPiA}{r4aCN4S0Rk-Akqdfu1WEQg}{10.0.1.5}{10.0.1.5:9300}]
[2016-09-13 18:11:58,287][INFO ][cluster.service          ] [c251c9e143c1] added {{aeacf654f679}{M7ymfkquRqmzHXa61zQbjQ}{pzJgqQHLSE6r9g1Ueh6rgQ}{10.0.1.8}{10.0.1.8:9300},}, reason: zen-disco-node-join[{aeacf654f679}{M7ymfkquRqmzHXa61zQbjQ}{pzJgqQHLSE6r9g1Ueh6rgQ}{10.0.1.8}{10.0.1.8:9300}]
The master node is running and three slave nodes were attached to the cluster. As noted above, two ports were published outside the cluster, mapped to the master node; with these ports we can operate our ElasticSearch deployment, for example:
mochoa@localhost:~/es$ curl http://192.168.99.100:9200/_nodes/process?pretty
{
  "_nodes" : {
    "total" : 4,
    "successful" : 4,
    "failed" : 0
  },
  "cluster_name" : "ESCookBook",
....
  }
}
Note that we are using 192.168.99.100 (the manager1 IP) instead of the ElasticSearch master node's internal IP (10.0.1.2).
Finally, we can scale our services up and down according to our needs. For example, if we need to upload a lot of data to our ElasticSearch cluster, we can scale the ingest nodes to two:
mochoa@localhost:~/es$ docker service scale es_ingest=2
es_ingest scaled to 2
mochoa@localhost:~/es$ docker logs -f es_master.1.8fuhvyy9scidk9fzmmme9k14l
[2016-09-13 18:42:37,811][INFO ][cluster.service          ] [c251c9e143c1] added {{223fe8d80047}{f7GkmloETKe4UZsJy0nOsg}{fO6738D4T-ORB6GFFSSbsw}{10.0.1.9}{10.0.1.9:9300},}, reason: zen-disco-node-join[{223fe8d80047}{f7GkmloETKe4UZsJy0nOsg}{fO6738D4T-ORB6GFFSSbsw}{10.0.1.9}{10.0.1.9:9300}]
Graphically:

Remember that the es_ingest service was constrained to nodes with type es_ingest, so the new replica was started on node worker4.
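If you are not running the visualizer, you can confirm where each replica landed from the command line; docker service ps should do it in this release (a quick sketch, output omitted):

mochoa@localhost:~/es$ docker service ps es_ingest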
Here is an example of scaling the data nodes to three:
mochoa@localhost:~/es$ docker service scale es_data=3
es_data scaled to 3
mochoa@localhost:~/es$ docker logs -f es_master.1.8fuhvyy9scidk9fzmmme9k14l
[2016-09-13 18:48:57,919][INFO ][cluster.service          ] [c251c9e143c1] added {{624b3fdc746f}{fOJW3Ms1RdiYeTuS4jtdnQ}{208lYQhoRj2LIDspst2Njg}{10.0.1.10}{10.0.1.10:9300},}, reason: zen-disco-node-join[{624b3fdc746f}{fOJW3Ms1RdiYeTuS4jtdnQ}{208lYQhoRj2LIDspst2Njg}{10.0.1.10}{10.0.1.10:9300}]
Graphically:

The latest es_data ElasticSearch data node was deployed onto the empty cluster node worker3.
Finally, we can stop our ElasticSearch cluster nodes using:
mochoa@localhost:~/es$ docker service rm es_ingest es_data es_master
es_ingest
es_data
es_master
Final thoughts: remember that Docker Swarm services are ephemeral, so do not store persistent data in them. So what happens to my data? Well, if your data center infrastructure is large, you can bring nodes up and down using the scale command and ElasticSearch will move your data around using replicas and its cluster allocation algorithms; alternatively you could use Docker storage drivers, but it is not clear to me how to map each storage node to a particular persistent directory.
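One possible direction for persistence, as an untested sketch only: docker service create accepts a --mount flag, so each es_data task could write to a named local volume on whatever node it lands on. The esdata volume name here is hypothetical, and the node-to-directory mapping concern above still applies:

mochoa@localhost:~/es$ docker service create --network es_cluster --name es_data \
    --constraint 'node.labels.type == es_data' --replicas=2 \
    --mount type=volume,source=esdata,target=/usr/share/elasticsearch/data \
    --env ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch/swarm:5.0.0 \
    -E bootstrap.ignore_system_bootstrap_checks=true -E cluster.name="ESCookBook" \
    -E node.master=false -E node.data=true -E discovery.zen.ping.unicast.hosts=es_master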
All the scripts for this post are available in my GitHub repo.

EBS 12.2 August 2016 Technology Stack RPC Now Available

Steven Chan - Tue, 2016-09-13 14:26

The latest cumulative set of updates to the E-Business Suite 12.2 technology stack foundation utilities is now available in a new August 2016 Recommended Patch Collection (RPC):

Oracle strongly recommends that all E-Business Suite 12.2 users apply this set of updates.

What issues are fixed in this patch?

This cumulative Recommended Patch Collection contains important fixes for issues with the Oracle EBS Application Object Library (FND) libraries that handle password hashing and resets, Forms-related interactions, key flexfields, descriptive flexfields, and more. Specific issues fixed include:

  • 18071903 - Post Mixed Case Pswrd on Db and Clone on Hashed Apps Can't Change Applsys Pswrd
  • 18083491 - Password Reset for Existing User Is Not Working
  • 18137744 - FNDCPASS Not Changing Password on Cloned Ebs R12.2.3
  • 18383570 - FNDCPASS Not Changing Password After Upgrade to 12.2.3
  • 19259764 - Error when Opening Forms in IE8 on Multi-Node Ebs 12.2.3
  • 19891697 - Performance Problems with Results Set Cache
  • 19899452 - Getting Accounts Payable Descriptive Flexfield Error - Maximum Value Size for Segment Is Truncated
  • 20537212 - Values in Item Codes Are Not Visible when Applying Key Flexfield Security Rules
  • 20814982 - Defaulting Descriptive Flexfield Segment Behavior Is Different from 11I
  • 21612876 - Cross Validation  Performance Issues
  • 22550312 - Over 2300 Contexts Defined Causes FNDFFVGN Signal 11
  • 23115501 - App-Fnd-01023 "The Following Required Field Does Not Have a Value"
  • 23586683 - CCID Not Saved when Account Segments Are Changed Using the Accounting Flexfield


Categories: APPS Blogs

#OOW16 - Why You Should Attend: Content Management in the Cloud: Strategy & Roadmap (CON7257)?

WebCenter Team - Tue, 2016-09-13 10:54

In a world where work is no longer dictated by location and spans not just geography but our ecosystem – employees, partners, suppliers and customers, as well as channels – web, mobile, social, how do you drive effective work collaboration and rapid decision making? How do you optimize the complete content management lifecycle – from collaboration around content creation especially on mobile, content upkeep to omni-channel content delivery and drive analytics around content usage and interaction?

If you are heading to Oracle OpenWorld in two weeks, don’t miss the executive DX session that discusses Oracle's Content and Experience Management strategy and vision:

Content Management in the Cloud: Strategy and Roadmap [CON7257]
Monday, September 19, 12:30 pm
Moscone West—2014

You will hear about Oracle’s strategy, roadmap and vision for driving an integrated content management experience in the Cloud. In an always-on world, learn how you can stay connected and collaborate in real time, even on mobile devices. Find out how mobile collaboration should go beyond file sync and share to drive social-based engagement. You will hear about a content strategy that doesn’t create content silos but bridges the gap between your existing content management system and your ecosystem.

Previous generations of content management were tightly coupled to a specific channel. For instance, Web Content Management Systems were meant only to support the Web. Learn how Oracle’s flagship cloud solution for content and social collaboration, Oracle Documents Cloud Service is providing you a single platform to drive content across the different channels and in context of your applications. Hear first-hand about the product innovations being delivered to simplify work and decision making processes, drive social and mobile engagement, and deliver application extensions. Organizations using Oracle Marketing Cloud, Oracle Service Cloud, Oracle Sales Cloud or those using on-premises applications like Oracle JD Edwards and Siebel would benefit from learning about the integration use case between the applications and Oracle Documents Cloud Service.

And, don’t miss the live mobile demos of the solution that showcase real-life use cases to leverage in your organization. Product experts will be at hand to answer your questions and discuss best practices.

And most importantly, join this session to hear from one of your peers, Edi Nilson Piovezani, Director of Infrastructure from Omni Financiera in Brazil, who will be sharing his content management journey to the Cloud. Omni uses Oracle Documents Cloud Service to store customer documents used in the credit request process, managed by WebCenter Content. A customized portal allows the 10,000 credit agents to easily submit the documents for credit analysis. Existing WebCenter and on-premises content management system stakeholders may find this use case particularly insightful.

Add to your My Schedule today:

Content Management in the Cloud: Strategy and Roadmap [CON7257]

Monday, September 19, 12:30 pm
Moscone West—2014

For a complete listing of all things Digital Experience don’t miss our recently launched website, Content and Experience at Oracle Open World 2016.

Psst: Attend this and the key DX, Mobile sessions and it may be a "rewarding" experience. Check out the details under the "mobile contest" section on our site: https://sites.oracle.com/oowdx

Btw, the site is built on Oracle Sites Cloud Service with Oracle Documents Cloud Service as its underlying content repository!

And join the conversation on twitter using #OOW16 and #OracleDX. You may find your tweet show up on the site too…

See you at OOW16 then!


Calculate compound interest using SQL

Tom Kyte - Tue, 2016-09-13 06:26
Hi, I am trying to create a logic. Please help me out. Suppose I have bought a fund in Jan 2015 which is going to mature in year 2018. I am receiving the interest semi-annually. I want to check the interest received from Oct 16 to Sep 17. Bas...
Categories: DBA Blogs

Execute Immediate with forall

Tom Kyte - Tue, 2016-09-13 06:26
Hi Tom, I've the below code where I need to update different tables, setting status to null. declare type v_table_name is table of varchar2(100); p_tab v_table_name; pv_tab v_table_name; v_sql varchar2(100); v_var varchar2(100); ...
Categories: DBA Blogs

Standby from hot backup and 2 identical files

Tom Kyte - Tue, 2016-09-13 06:26
Hi, friends! We have a primary server, a standby and a second standby. The 2nd standby is a new server where I want to migrate the database. I made a hot backup. The 2nd standby has 1 directory for datafiles, but the primary has 3 directories. In init file...
Categories: DBA Blogs

Clob size

Tom Kyte - Tue, 2016-09-13 06:26
Hi, We have table t and it has clob segment c. We can measure the size of the clob segment from dba_segments. Say on day 1: size is 0 gb. On day 2: size is 4 gb. On day 3: we have pre-allocated the extents of size 100gb. On day 4: we ...
Categories: DBA Blogs

Does the Data Guard Broker populate and manage the FAL_SERVER and FAL_CLIENT parameters?

Tom Kyte - Tue, 2016-09-13 06:26
I found in the 9.2 documentation that these two parameters were removed but everywhere I look it seems people are populating them manually still and I see no mention of them being managed by the broker in 11 and 12 documentation. Are these still mana...
Categories: DBA Blogs

Insert Direct path load

Tom Kyte - Tue, 2016-09-13 06:26
Hi Tom, While inserting records in a table, when we use the append hint it goes for a direct path load, but when we use the parallel hint it goes for a conventional path load. Is that correct? While inserting, should we always use Append hints not parallel hi...
Categories: DBA Blogs

Result cache side effects on number of calls

Yann Neuhaus - Tue, 2016-09-13 04:38

During the execution of a SQL statement, you cannot guess how many times an operation, a predicate, or a function will be executed. This depends on the execution plan, on some caching at execution, and some other execution time decisions. Here is an example where result cache may bring some overhead by calling a function multiple times.

Here is my function:
SQL> create or replace function F return number is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.

The function displays ‘Hello World’ so that I can check how many times it is executed (I’ve set serveroutput on).

Obviously, on a one row table, it is called only once:
SQL> select f from dual;
 
F
----------
255
 
Hello World

Query result cache miss

I’ll now run the same query, but with the result cache hint. The first execution will have to execute the query because the cache is empty at that point:

SQL> exec dbms_result_cache.flush;
PL/SQL procedure successfully completed.
 
SQL> select /*+ result_cache */ f from dual;
 
F
----------
255
 
Hello World
Hello World

Here is what I wanted to show: ‘Hello World’ is displayed two times instead of one. If your function is an expensive one, then the first execution, or every cache miss, will have a performance overhead.

Query result cache hit

Now that the result is in the cache:

SQL> select id, type, status, name from v$result_cache_objects;
 
ID TYPE STATUS NAME
---------- ---------- --------- ------------------------------------------------------------
33 Dependency Published DEMO.F
34 Result Published select /*+ result_cache */ f from dual

and since the table has not changed (it’s DUAL here ;-) ), further executions do not call the function anymore, which is the expected result.

SQL> select /*+ result_cache */ f from dual ;
 
F
----------
255

Bug or not?

Bug 21484570 has been opened for that and closed as ‘Not a bug’. There is no guarantee that the function is evaluated once, twice, more or never.
OK, why not? That’s an implementation decision. Just keep it in mind: if you want to work around an expensive function called for each row, then the query result cache may not be the right solution (except if all tables are static and you always have cache hits).

Note that if the function is declared as deterministic, it is executed only once.
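For illustration, the deterministic variant differs only in its declaration; a minimal sketch with the same body as above:

SQL> create or replace function F return number DETERMINISTIC is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.

With that declaration, a cache miss on the query prints ‘Hello World’ only once.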

You can work around the issue by using the result cache at function level (in place of, or in addition to, the query result cache if you need it).

SQL> create or replace function F return number RESULT_CACHE is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.
 
SQL> select /*+ result_cache */ f from dual;
 
F
----------
255
 
Hello World
 
SQL> select id, type, status, name from v$result_cache_objects;
 
ID TYPE STATUS NAME
---------- ---------- --------- ------------------------------------------------------------
64 Dependency Published DEMO.F
66 Result Published "DEMO"."F"::8."F"#e17d780a3c3eae3d #1
65 Result Published select /*+ result_cache */ f from dual

So, not a big problem, just something to know. And anyway, the right design is NOT to call a function for each row, because that does not scale. Pipelined functions should be used for that.

 

Cet article Result cache side effects on number of calls est apparu en premier sur Blog dbi services.

Securefile space

Jonathan Lewis - Tue, 2016-09-13 01:29

A script hacked together a couple of years ago from a clone of a script I’d been using for checking space usage in the older types of segments. Oracle Corp. eventually put together a routine to peer inside securefile LOBs:

rem
rem     Script:         dbms_space_use_sf.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2013
rem     Purpose:
rem
rem     Last tested
rem             12.1.0.1
rem             11.2.0.4
rem     Not tested
rem             11.1.0.7
rem     Not relevant
rem             10.2.0.5
rem              9.2.0.8
rem              8.1.7.4
rem
rem     Notes:
rem     See also dbms_space_use.sql
rem
rem     11g introduced securefiles lobs and two overloads of
rem     dbms_space_usage to report space used by their segments
rem
rem     Valid values for suoption are:
rem             SPACEUSAGE_EXACT (16): Computes space usage exhaustively
rem             SPACEUSAGE_FAST  (17): Retrieves values from in-memory statistics
rem


define m_seg_owner      = &1
define m_seg_name       = &2
define m_seg_type       = &3

define m_segment_owner  = &m_seg_owner
define m_segment_name   = &m_seg_name
define m_segment_type   = &m_seg_type

spool dbms_space_use_sf

prompt  ============
prompt  Secure files
prompt  ============

declare
        wrong_ssm       exception;
        pragma exception_init(wrong_ssm, -10614);

        m_segment_size_blocks   number(12,0);
        m_segment_size_bytes    number(12,0);
        m_used_blocks           number(12,0);
        m_used_bytes            number(12,0);
        m_expired_blocks        number(12,0);
        m_expired_bytes         number(12,0);
        m_unexpired_blocks      number(12,0);
        m_unexpired_bytes       number(12,0);

begin
        dbms_space.space_usage(
                upper('&m_segment_owner'),
                upper('&m_segment_name'),
                upper('&m_segment_type'),
--              PARTITION_NAME          => null,
                suoption                => dbms_space.spaceusage_exact,
--              suoption                => dbms_space.spaceusage_fast,
                segment_size_blocks     => m_segment_size_blocks,
                segment_size_bytes      => m_segment_size_bytes,
                used_blocks             => m_used_blocks,
                used_bytes              => m_used_bytes,
                expired_blocks          => m_expired_blocks,
                expired_bytes           => m_expired_bytes,
                unexpired_blocks        => m_unexpired_blocks,
                unexpired_bytes         => m_unexpired_bytes
        );

        dbms_output.new_line;
        dbms_output.put_line(' Segment Blocks:    '|| m_segment_size_blocks || ' Bytes: '|| m_segment_size_bytes);
        dbms_output.put_line(' Used Blocks:       '|| m_used_blocks         || ' Bytes: '|| m_used_bytes);
        dbms_output.put_line(' Expired Blocks     '|| m_expired_blocks      || ' Bytes: '|| m_expired_bytes);
        dbms_output.put_line(' Unexpired Blocks   '|| m_unexpired_blocks    || ' Bytes: '|| m_unexpired_bytes);

exception
        when wrong_ssm then
                dbms_output.put_line('Segment not ASSM');
end;
/

prompt  ===============
prompt  Generic details
prompt  ===============

declare
        m_TOTAL_BLOCKS                  number;
        m_TOTAL_BYTES                   number;
        m_UNUSED_BLOCKS                 number;
        m_UNUSED_BYTES                  number;
        m_LAST_USED_EXTENT_FILE_ID      number;
        m_LAST_USED_EXTENT_BLOCK_ID     number;
        m_LAST_USED_BLOCK               number;
begin
        dbms_space.UNUSED_SPACE(
                upper('&m_segment_owner'),
                upper('&m_segment_name'),
                upper('&m_segment_type'),
--              PARTITION_NAME                  => null,
                m_TOTAL_BLOCKS,
                m_TOTAL_BYTES,
                m_UNUSED_BLOCKS,
                m_UNUSED_BYTES,
                m_LAST_USED_EXTENT_FILE_ID,
                m_LAST_USED_EXTENT_BLOCK_ID,
                m_LAST_USED_BLOCK
        );

        dbms_output.put_line('Segment Total blocks: '  || m_total_blocks);
        dbms_output.put_line('Object Unused blocks: '  || m_unused_blocks);

end;
/

spool off
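The script takes the segment owner, segment name and segment type as its three positional parameters, so a call against a (hypothetical) securefile LOB segment would look something like:

SQL> @dbms_space_use_sf test_user t1_lob lob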

Sample of output:


============
Secure files
============

 Segment Blocks:    168960 Bytes: 1384120320
 Used Blocks:       151165 Bytes: 1238343680
 Expired Blocks     17795 Bytes: 145776640
 Unexpired Blocks   0 Bytes: 0

PL/SQL procedure successfully completed.

===============
Generic details
===============
Segment Total blocks: 168960
Object Unused blocks: 0

PL/SQL procedure successfully completed.



#OOW16: Focus on Content and Experience Management & Mobile

WebCenter Team - Mon, 2016-09-12 15:34

Content and Experience Management & Mobile at #OOW16
Each year at Oracle OpenWorld, we drive the best and most innovative sessions for our attendees. This year is no different. With over 25 speaking sessions reflecting strategy, vision and perspectives from our executives, technology experts, customers, partners, user groups and industry thought leaders; 5 live product demos; and 4 hands-on-labs devoted to Oracle WebCenter, Digital Experience, Mobile and our Content and Experience Management solutions, we are excited to showcase our product innovations, customer successes, and strategy and vision at OOW 2016! 

NOW LIVE! Oracle OpenWorld Content and Experience Management Website
So consider this…no more fumbling through printed materials, no more trying to search to find when or where the next session is or where the demogrounds are located. At this year’s Oracle OpenWorld, your customers and partners will have a much more simplified and engaging experience, from any device – web, phone, tablet! The Oracle OpenWorld Content and Experience Management website is the ONLY link you will need to keep tabs on all things Digital Experience at Oracle OpenWorld 2016. You will find information on the upcoming sessions, the day’s agenda, ready information on Hands-on-Labs, Demo locations and hours, and more. Live twitter feed and pictures will allow you to experience the conference in real time. The site is mobile-first so you can enjoy a rich, dynamic mobile experience. They say…you have to experience it to believe it, and @ #OOW16 your customers and partners will get their chance! Please bookmark this website to stay in the know of all things #OracleDX at #OOW16. https://sites.oracle.com/oowdx


This Year’s OpenWorld Highlights
  • Over 25 Content and Experience Management & Mobile sessions featuring customers and partners like Cox Communications, EMO Transportation, McKesson, OMNI, PricewaterhouseCoopers LLP, Fishbowl Solutions, Hellosign, IntraSee, TekStream Solutions, Facebook, National Pharmacies, Sofbang, Samsung, and more!  
  • A Meet the Experts session on Tuesday, Sep 20, from  6:15 p.m. - 7:00 p.m. at Moscone South—302 [MTE7085]. This is your chance to meet the gurus from the Content and Experience Management team as they discuss how to “Transform Businesses with Content and Experience Management”.
  • 5 live product demo stations; 4 hands-on labs sessions, including Cloud labs focusing on Content and Experience Management solutions, highlighting Sites Cloud Service, Documents Cloud Service, Process Cloud Service and Mobile Cloud Service.
  • Innovation Awards ceremony with winners and representation from our Content and Experience Management solutions. Tuesday 9/20 | 4:00pm – 6:00pm | YBCA Theater | Event Contact | OOW Registration Required.
  • OpenWorld Content and Experience Management CVC available for account teams and customers. Contact Ellen Gravina to setup customer meetings with Product Management.
  • NEW! Oracle OpenWorld Content and Experience Management website and Mobile Challenge App Game.
Must Attend Sessions
Don’t miss the Strategy and Vision sessions for the overall Content and Experience Management & Mobile portfolios, as well as each of the respective solution and cloud technologies. These not-to-be-missed sessions can help organizations plan their roadmaps. Attendees will also get an opportunity to hear from customer organizations on implementation successes.

Monday, Sep 19
  • Content and Experience Management: Roadmap and Vision  [CON7256]
    • 11:00 a.m. – 11:45 a.m. | Moscone West—2014
    • David Le Strat, Sr. Director, Product Management, Oracle
    • Why should you attend? In this session, you will hear Oracle executive David Le Strat discuss the key focus areas of investment and updates on the Oracle Cloud portfolio for Content and Experience Management, and how existing Oracle WebCenter customers can leverage the benefits of Cloud.  You will also get to see the solutions in action and explore use cases that are driving digital experiences in our customer organizations today.
  • Mobile Roadmap and Vision: Mobile Now and in the Future: Location, Cognitive Insights, Chatbots & Much More [CON7881]
    • 11:00 a.m. – 11:45 a.m. | Moscone West—2016
    • Suhas Uliyar, Vice President, Mobile Strategy and Product Management, Oracle 
    • Roger Westerbeek, Manchester Airports Group
  • Content Management in the Cloud: Strategy & Roadmap [CON7257]
    • 12:30 p.m. – 1:15 p.m. | Moscone West—2014
  • Build Better Apps Faster with Oracle E-Business Suite with Oracle Mobile [CON6129]
    • 12:30 p.m. – 1:15 p.m. | Moscone West—2016
  • Digital Experience in the Cloud: Strategy & Roadmap [CON7258] 
    • 1:45 p.m. – 2:30 p.m. | Moscone West—3000
Tuesday, Sep 20
  • Gain Deep Insights into User Activity Across Web and Mobile to Engage Meaningfully [CON6735]
    • 12:15 p.m. – 1:00 p.m. | Moscone West—2016
    • Anush Kumar, Director, Product Strategy, Oracle
    • Shailendra Mishra, VP, Development, Oracle
    • Ryan Klose, General Manager, National Pharmacies Group
    • Why should you attend? Ryan Klose, General Manager of Australia’s National Pharmacies Group will join Anush Kumar and Shailendra Mishra of Oracle to highlight Oracle's new Customer Insights & Engagement Cloud Service. If you’re looking for a big data platform that provides marketers and mobile decision-makers user behavior metrics through application engagement, conversion, location, and retention reports, in addition to predicting user churn, this is a session, and a cloud service solution, you don’t want to miss!
  • WebCenter Content, Imaging, Capture & Forms Recognition: Roadmap & Strategy [CON7259]
    • 4:00 p.m. – 4:45 p.m. | Moscone West—2014
  • Learn from Oracle Mobile Customers: Going from Strategy to Live in Weeks [CON 6379]
    • 4:00 p.m. – 4:45 p.m. | Moscone West—2016
    • Joe Huang, Product Management, Oracle Mobile Cloud Platform, Oracle
    • M.A. Haseeb, EVP/SVP/VP, Tetra Tech Inc.
    • Serdar Yorgancigil, Director, IT, AAR Corp
    • Fulvio Manente, Head of IT, Estapar
    • Bala Venkataraman, Martin Marietta
    • Why should you attend? You’ve already seen above who some of our customers are… now come hear how they formulated their mobile strategies, created compelling mobile apps, and achieved positive ROI using the Oracle Mobile Cloud Platform!
  • Oracle WebCenter Digital Experience: Sites & Portal Strategy and Roadmap [CON7260]
    • 5:15 p.m. – 6:00 p.m. | Moscone West—2014
Wednesday, Sep 21
  • Cox Enterprises Reimagines the Digital Workplace with Oracle WebCenter [CAS4789] 
    • 3:00 p.m. – 3:45 p.m. | Marriott Marquis—Golden Gate C3
There are PaaS General Sessions and many more sessions highlighting customer successes, product deep dives, partner discussions, persona and/or industry based discussions, Cloud/PaaS lessons, live product demonstrations and Hands-On-Labs (HOL) sessions so do bookmark the following links for a complete, up to date listing: 
My Schedule is now live for Oracle OpenWorld! Customer and partner attendees can use My Schedule to plan and optimize their time during the conferences by building personalized conference schedules beforehand. We recommend adding the sessions mentioned above and the others of interest from the Focus On documents listed above. 

Content and Experience Management Auxiliary Events at OOW
In addition to regularly scheduled programs of sessions, hands-on labs and demos, we have planned additional events for our customers and partners to actively engage in product roadmap feedbacks and network with their peers. These auxiliary events include:
  • Innovation Awards Ceremony | Tuesday 9/20 | 4:00pm – 6:00pm | YBCA Theater | Event Contact | OOW Registration Required
  • Oracle Appreciation Event at OOW16 with Billy Joel | Wednesday 9/21 | 7:00pm – 11:00pm | Event Information
NOW AVAILABLE! Mobile Challenge App Game
The #OOW16 Mobile Experience will include an interactive, Pokémon Go-styled mobile app that was built using Oracle Mobile Cloud Service, through which you can win prizes. You can find out more information in this blog post! Are you ready for the Mobile Challenge? Download the app from Google Play or the Apple App Store today!

Social Media Communications
We will be highlighting our key sessions and other important information on the Oracle DX blog periodically. In addition, please use the following hashtags to discuss OOW on your respective channels!

  • #OOW16 | #OracleDX | #OracleMCS
And we are asking you to please follow along and join the conversations on our social media channels.
We are looking forward to a successful #OOW16!

Oracle Service Secrets: Migrate Transparently

Pythian Group - Mon, 2016-09-12 15:02

Databases or schemas tend to get moved around between different servers or even datacenters for hardware upgrades, consolidations or other migrations. And while the work that needs to be done is pretty straightforward for DBAs, I find the most annoying aspect is updating all the client connect strings and tns entries with the new IP addresses and, if not using services, also the SID, as the instance name might have changed.

That process can be simplified a lot by following a simple good practice: create an extra service for each application or schema, and along with that service also a DNS name for its IP. With that in place, a database can be migrated without the need to touch client connection parameters or tns aliases. All that is needed is to migrate the database or schema, create the service name on the new instance and update the DNS record to point to the new machine.

Demo

Here is an example. I am migrating a schema from an 11g single instance on my laptop to a RAC database in the Oracle Public Cloud. I am connecting to that database with blog_demo.pythian.com both as the hostname (faked through /etc/hosts instead of proper DNS for this demo) and as the service name. As the application, I am connecting to the database with sqlcl and a static connection string. Just remember that the whole and only point of this demo is to migrate the schema without having to change that connect string.
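For reference, the faked DNS is just an /etc/hosts line; before the migration it would look something like this (matching the ping output below):

192.168.78.101   blog_demo.pythian.com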

brost$ ping -c 1 blog_demo.pythian.com
PING blog_demo.pythian.com (192.168.78.101): 56 data bytes
64 bytes from 192.168.78.101: icmp_seq=0 ttl=64 time=0.790 ms

brost$ ./sqlcl/bin/sql brost/******@blog_demo.pythian.com/blog_demo.pythian.com

SQLcl: Release 4.2.0.16.175.1027 RC on Mon Sep 05 17:50:11 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> select instance_name from v$instance;

INSTANCE_NAME   
----------------
ORCL        

Next I migrated the schema with Data Pump and imported it into a PDB running on a 12c RAC database.

Then I added the service name BLOG_DEMO to PDB1 on the database ORCL42.

$ srvctl add service -db orcl42 -pdb pdb1 -service blog_demo -preferred orcl421,orcl422
$ srvctl start service -db orcl42 -service blog_demo

I updated the DNS, or in this simplified demo my /etc/hosts, and now I can connect with the same connection string. Note that the IP, the instance_name and the version have all changed without the need to modify the connection string.

brost$ ping -c 1 blog_demo.pythian.com
PING blog_demo.pythian.com (140.86.42.42): 56 data bytes

brost$ ./sqlcl/bin/sql brost/******@blog_demo.pythian.com/blog_demo.pythian.com

SQLcl: Release 4.2.0.16.175.1027 RC on Mon Sep 05 18:05:11 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Mon Sep 05 2016 18:04:50 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select instance_name from v$instance;
INSTANCE_NAME   
----------------
orcl421

Note that with a proper DNS and a RAC target you would want to create A-records for the 3 SCAN IPs.
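As a sketch of what that could look like in a zone file (only the .42 address appears in this demo; the other two SCAN IPs are hypothetical):

blog_demo.pythian.com.  IN  A  140.86.42.40
blog_demo.pythian.com.  IN  A  140.86.42.41
blog_demo.pythian.com.  IN  A  140.86.42.42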

Other posts in this series

You can watch me talk briefly about this and other things that you can do with properly configured services in the video below or follow the links to other parts in this series.

tl;dr

When creating services for your applications to connect to a schema, also create a DNS entry for it, and use this DNS name and the service for all client and application connections instead of the hostname and SID. This might initially look like overhead but allows for flexibility when migrating schemas or databases to other systems. Updating DNS and creating a new service on the target machine can be done in central places and saves updating potentially hundreds of client connect strings or tnsnames entries across the enterprise.
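For clients that use tnsnames.ora rather than an EZConnect string like the demo above, the alias would follow the same principle; a sketch (the alias name BLOG_DEMO is arbitrary):

BLOG_DEMO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = blog_demo.pythian.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = blog_demo.pythian.com))
  )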

Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator