Feed aggregator

Extract Logical Operators (AND/OR) from String

Tom Kyte - Tue, 2016-10-18 09:06
I want help to extract logical operators (AND/OR) from a string having any number of operators. For example: ((1=1 AND 1=1) OR 1=1) AND (1=1 AND 1=1)
Categories: DBA Blogs

MODEL CLAUSE

Tom Kyte - Tue, 2016-10-18 09:06
Hi, this is my query: I have a dictionary table. create table D_CONFIG_TST ( row_name VARCHAR2(1500), -- text field with name of row row_number VARCHAR2(50), -- text field with row number for order row_array VARCHAR2(15...
Categories: DBA Blogs

Change Data Capture to get modified records of a table

Tom Kyte - Tue, 2016-10-18 09:06
Hello Tom, Hope you are doing good! Requesting your suggestions for the following scenario. We have a requirement wherein we have to migrate modified/delta data on a table (modified by DML statements) from Oracle to MongoDB. To do this I have su...
Categories: DBA Blogs

DBMS_OUTPUT to Query output

Tom Kyte - Tue, 2016-10-18 09:06
Hello Tom, First, thanks for helping me out. A few days ago you explained how to insert dbms_output into temporary tables or nested tables. This was the example: This example works perfectly if I have write permission in the database. Is there a wa...
Categories: DBA Blogs

The output file alignment must be adjusted.

Tom Kyte - Tue, 2016-10-18 09:06
Hi Team, I executed shell script file. The shell script file has the below sql information in it. SET SERVEROUTPUT ON; SET LINESIZE 4600; #SET TRIMSPOOL ON; SET WRAP OFF; SET HEADING OFF; #SET TRIMOUT ON; SET TIMING ON; SET FEEDBA...
Categories: DBA Blogs

"Error occurred : ORA-01555: snapshot too old: rollback segment number 313 with name "_SYSSMU313_2192191193$" too small"

Tom Kyte - Tue, 2016-10-18 09:06
Hi Tom, I'm posting an email conversation between a DBA and a technical lead in our organization. Is the answer you gave 15 years back on "ORA-01555" still valid for the new Oracle releases? DBA -- "This is due to long running qu...
Categories: DBA Blogs

Weblogic 11g to 12c: strictness in listen address

Darwin IT - Tue, 2016-10-18 07:50
Let's say you have a virtual machine with two network adapters, both set on 'HostOnly'.
I used to do that and set the first one of those to a fixed IP address, say 10.0.0.1. To this one I coupled the hostname, for instance darlin-vce-db, using the /etc/hosts file. That way I had a fixed, always existing network address for the database.

Together with the database, you install WebLogic, for instance to serve SOA Suite, OSB, or whatever custom application you want to run. Now, wouldn't it be nice to be able to reach WebLogic from a browser outside the virtual machine? Of course, because this is what you do nowadays: almost everywhere I go, servers are hosted on virtualized computing environments, like VMware VSE or Oracle VM. So that's where the second adapter comes in, dynamically coupled to an address of the form 192.168.56.101, for instance. Externally, using the hosts file on your host OS (in my case Windows), you also couple that address to darlin-vce-db.

So you have two /etc/hosts settings for the hostname, darlin-vce-db:
Internally, in the VM:
10.0.0.1        darlin-vce-db     darlin-vce-db.darwin-it.local
And externally on your host OS:
192.168.56.101  darlin-vce-db     darlin-vce-db.darwin-it.local

Nothing special, right? Well, WebLogic 11g apparently just listens to the hostname 'darlin-vce-db' if that is entered as the listen address. It does not seem to care if the request for 'darlin-vce-db' comes in via 192.168.56.101 instead of the 10.0.0.1 address to which the hostname is actually bound.

However, in this particular case WebLogic 12c seems to behave differently. If you provide 'darlin-vce-db' as the listen address, and that name is bound to the network adapter with 10.0.0.1, it expects requests to come in via that adapter as well. It seems to ignore requests that come in via other adapters (in my case 192.168.56.101).
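A quick way to observe the difference is a request to the console from the host OS; the port (7001) is an assumption here, adjust it to your AdminServer:

# From the host OS, where darlin-vce-db resolves to 192.168.56.101:
curl -v http://darlin-vce-db:7001/console
# Against 11g this answers with the console login page; against 12c, with
# 'darlin-vce-db' bound to the 10.0.0.1 adapter, the request arriving via
# 192.168.56.101 appears to be ignored.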

You can solve this partially using channels: in the WebLogic Console, navigate to the particular managed server and click Protocols, then Channels.
Create a new channel:

Give it a name like 'Extern-Intern', or something else that properly denotes its purpose, and choose 'http' as the protocol:

Then provide the internal address, for instance 'darlin-vce-db', and the external listen address:

Leave the ports at the default listen port, in this case, and finish the wizard.
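For repeatability, you can also script such a channel with WLST instead of clicking through the console. A minimal sketch, where the wlst.sh path, credentials, managed server name ('soa_server1'), and port are hypothetical and must be adjusted to your own domain:

/u01/app/oracle/middleware/oracle_common/common/bin/wlst.sh <<'EOF'
# Connect to the AdminServer (hypothetical credentials/URL)
connect('weblogic', 'welcome1', 't3://darlin-vce-db:7001')
edit()
startEdit()
# Create the channel on the managed server (hypothetical name)
cd('/Servers/soa_server1')
cmo.createNetworkAccessPoint('Extern-Intern')
cd('NetworkAccessPoints/Extern-Intern')
cmo.setProtocol('http')
cmo.setListenAddress('darlin-vce-db')    # internal address
cmo.setPublicAddress('192.168.56.101')   # external address
cmo.setListenPort(8001)                  # keep the server's default listen port
save()
activate()
exit()
EOF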
Although this helps in connecting to the WebLogic Console and EM, or, using the same method on the SOA server, to the SOA Composer (soaserver:port/soa/composer), BPM Workspace (soaserver:port/bpm/workspace), etc., it will not work for JDeveloper.

When trying to deploy a SOA composite from JDeveloper, you define/choose an ApplicationServer connection to the AdminServer. When deploying a composite, however, the AdminServer figures out which SOA servers are running and lets JDeveloper deliver the composite to those servers. But then the SOA server(s) refuse the connections from JDeveloper. Testing the ApplicationServer connection will show success for the HTTP connection to the AdminServer, but failure for all the other components.

The solution is then to pick one particular network adapter/IP address and make sure that, both internally and externally, the hostname is coupled to that same IP address.
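For example, to reach WebLogic both from within the VM and from the host OS, both hosts files could carry the same mapping to the host-only adapter:

192.168.56.101  darlin-vce-db     darlin-vce-db.darwin-it.local

With that in place, 'darlin-vce-db' as listen address resolves to the same adapter on both sides, and WebLogic 12c accepts the requests.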

Oracle Enterprise Manager Cloud Control 13c Release 2 (13cR2) Installation/Upgrade

Tim Hall - Tue, 2016-10-18 02:51

Oracle Enterprise Manager Cloud Control 13c Release 2 (13cR2) was released a couple of weeks ago. In a previous post I mentioned we were going to stop our rollout of 13cR1 agents to production and upgrade from 13cR1 to 13cR2 before we resumed.

I don’t like doing anything at work that I haven’t already tried at home, so the first step in that process was for me to do some clean installs and practice upgrades. After a busy weekend and a late night last night I think I’m happy with the process. That resulted in these articles.

If you’ve done a 13cR1 installation, the clean 13cR2 installation will come as no surprise. They now have multitenant and non-CDB repository templates to choose from. I used the multitenant template in this example. The installation was fine on both OL6 and OL7, so nothing out of the ordinary to report there.

The upgrade process was similar to previous point release upgrades too. We used the non-CDB template, the only one available at the time, to build our 13cR1 installation, so not surprisingly I practised the upgrade using that as a starting point. The upgrade process went fine, but I got a lot of warning messages during the process. It was all working fine at the end though…

I guess we will start rolling this bad-boy out once I get back from the APAC Tour and Bulgaria (BGOUG).

Cheers

Tim…


Documentum story – Change the location of the Xhive Database for the DSearch (xPlore)

Yann Neuhaus - Tue, 2016-10-18 02:00

When using xPlore with Documentum, you need to set up a DSearch, which is used to perform the searches, and this DSearch uses an Xhive Database in the background. This is a native XML database that persists XML DOMs and provides access to them using XPath and XQuery. In this blog, I will share the steps needed to change the location of the Xhive Database used by the DSearch. You usually don't want to move this XML database every day, but it might be useful as a one-time action. In this customer case, one DSearch in a Sandbox/Dev environment had been installed using a wrong path for the Xhive Database (not following our installation conventions), and therefore we had to correct that, just to keep the alignment between all environments and to avoid a complete uninstall/reinstall of the IndexAgents + DSearch.

 

In the steps below, I will suppose that xPlore has been installed under "/app/xPlore" and that the Xhive Database has been created under "/app/xPlore/data". This is the default value; when installing an IndexAgent, it will create, under the data folder, a sub-folder with a name equal to the DSearch domain's name (usually the name of the docbase/repository). In this blog I will show you how to move this Xhive Database to "/app/xPlore/test-data" without having to reinstall everything. This means that the Xhive Database will NOT be deleted/recreated from scratch (this is also possible) and therefore you will NOT have to perform a full reindex, which would have taken a looong time.

 

So let’s start with stopping all components first:

[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopIndexagent.sh"
[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopPrimaryDsearch.sh"

 

Once this is done, we need to back up the data and config files, just in case, and then move the data folder to its new location:

[xplore@xplore_server_01 ~]$ current_date=$(date "+%Y%m%d")
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/data/ /app/xPlore/data_bck_$current_date
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/config/ /app/xPlore/config_bck_$current_date
[xplore@xplore_server_01 ~]$ mv /app/xPlore/data/ /app/xPlore/test-data/

 

Ok, now everything in the background is prepared and we can start the actual steps to move the Xhive Database. The first step is to change the data location in the files stored in the config folder. There are actually two files that need to be updated: indexserverconfig.xml and XhiveDatabase.bootstrap. In the first file, you need to update the "storage-location" path that defines where the data are kept, and in the second file you need to update all paths pointing to the database files. Here are some simple commands to replace the old path with the new path and check that it has been done properly:

[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/indexserverconfig.xml
[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/XhiveDatabase.bootstrap
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep -A2 "<storage-locations>" /app/xPlore/config/indexserverconfig.xml
    <storage-locations>
        <storage-location path="/app/xPlore/test-data" quota_in_MB="10" status="not_full" name="default"/>
    </storage-locations>
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep "/app/xPlore/test-data" /app/xPlore/config/XhiveDatabase.bootstrap | grep 'id="[0-4]"'
        <file path="/app/xPlore/test-data/xhivedb-default-0.XhiveDatabase.DB" id="0"/>
        <file path="/app/xPlore/test-data/SystemData/xhivedb-SystemData-0.XhiveDatabase.DB" id="2"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/xhivedb-SystemData#MetricsDB-0.XhiveDatabase.DB" id="3"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/PrimaryDsearch/xhivedb-SystemData#MetricsDB#PrimaryDsearch-0.XhiveDatabase.DB" id="4"/>
        <file path="/app/xPlore/test-data/xhivedb-temporary-0.XhiveDatabase.DB" id="1"/>

 

The next step is to announce the new location of the data folder to the DSearch, so that it creates future Xhive Databases at the right location; this is done in the file indexserver-bootstrap.properties. After the update, this file should look like the following:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties
# (c) 1994-2009, EMC Corporation. All Rights Reserved.
#Wed May 20 10:40:49 PDT 2009
#Note: Do not change the values of the properties in this file except xhive-pagesize and force-restart-xdb.
node-name=PrimaryDsearch
configuration-service-class=com.emc.documentum.core.fulltext.indexserver.core.config.impl.xmlfile.IndexServerConfig
indexserver.config.file=/app/xPlore/config/indexserverconfig.xml
xhive-database-name=xhivedb
superuser-name=superuser
superuser-password=****************************************************
adminuser-name=Administrator
adminuser-password=****************************************************
xhive-bootstrapfile-name=/app/xPlore/config/XhiveDatabase.bootstrap
xhive-connection-string=xhive://xplore_server_01:9330
xhive-pagesize=4096
# xhive-cache-pages=40960
isPrimary = true
licensekey=**************************************************************
xhive-data-directory=/app/xPlore/test-data
xhive-log-directory=

 

In this file:

  • indexserver.config.file => defines the location of the indexserverconfig.xml file that must be used to recreate the DSearch Xhive Database.
  • xhive-bootstrapfile-name => defines the location and name of the Xhive bootstrap file that will be generated during bootstrap and will be used to create the empty DSearch Xhive Database.
  • xhive-data-directory => defines the path of the data folder that will be used by the Xhive bootstrap file. This will therefore be the future location of the DSearch Xhive Database.

 

As you probably understood, to change the data folder, you just have to adjust the value of the parameter “xhive-data-directory” to point to the new location: /app/xPlore/test-data.

 

When this is done, the third step is to change the Lucene temp path:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/xdb.properties
xdb.lucene.temp.path=/app/xPlore/test-data/temp

 

In this file, xdb.lucene.temp.path defines the path for temporary uncommitted indexes. It will therefore only be used for temporary indexes, but it is still good practice to change this location, since it also concerns DSearch data and it helps to keep everything consistent.

 

The next step is to clean the cache and restart the DSearch. You can use your custom start/stop script if you have one, or use something like this:

[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*
[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startPrimaryDsearch.sh & sleep 5;mv nohup.out nohup-PrimaryDsearch.out"

 

Once done, just verify in the log file generated by the start command (for me: /app/xPlore/jboss7.1.1/server/nohup-PrimaryDsearch.out) that the DSearch has been started successfully. If that’s true, then you can also start the IndexAgent:

[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startIndexagent.sh & sleep 5;mv nohup.out nohup-Indexagent.out"

 

And here we are, the Xhive Database is now located under the “test-data” folder!

 

 

Additional note: As said at the beginning of this blog, it is also possible to recreate an empty Xhive Database and change its location at the same time. Recreating an empty DB will result in the same thing as the steps above, BUT you will have to perform a full reindexing, which will take a lot of time if this isn't a new installation (the more documents are indexed, the more time it will take)… To perform this operation, the steps are mostly the same and are summarized below:

  1. Backup the data and config folders
  2. Remove all files inside the config folder except the indexserverconfig.xml
  3. Create a new (empty) data folder with a different name like “test-data” or “new-data” or…
  4. Update the file indexserver-bootstrap.properties with the reference to the new path
  5. Update the file xdb.properties with the reference to the new path
  6. Clean the cache and restart the DSearch+IndexAgents

Basically, the steps are exactly the same, except that you don't need to update the files indexserverconfig.xml and XhiveDatabase.bootstrap: the first one is normally updated by the DSearch automatically, and the second one will be recreated from scratch using the right data path, thanks to the update of the file indexserver-bootstrap.properties.
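Putting it all together, here is a rough sketch of that alternative sequence, assuming the same paths as above, a new data folder named "/app/xPlore/new-data", and that the IndexAgent and DSearch have already been stopped as shown at the beginning of this post (the step numbers refer to the list above):

[xplore@xplore_server_01 ~]$ current_date=$(date "+%Y%m%d")
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/data/ /app/xPlore/data_bck_$current_date       # 1. backup data
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/config/ /app/xPlore/config_bck_$current_date   # 1. backup config
[xplore@xplore_server_01 ~]$ find /app/xPlore/config -maxdepth 1 -type f ! -name indexserverconfig.xml -delete   # 2. keep only indexserverconfig.xml
[xplore@xplore_server_01 ~]$ mkdir /app/xPlore/new-data                                       # 3. new empty data folder
[xplore@xplore_server_01 ~]$ sed -i "s,^xhive-data-directory=.*,xhive-data-directory=/app/xPlore/new-data," /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties   # 4.
[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/new-data," /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/xdb.properties   # 5.
[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*     # 6. clean cache
[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startPrimaryDsearch.sh & sleep 5;mv nohup.out nohup-PrimaryDsearch.out"
[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startIndexagent.sh & sleep 5;mv nohup.out nohup-Indexagent.out"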

 

Have fun :)

 

The post Documentum story – Change the location of the Xhive Database for the DSearch (xPlore) appeared first on Blog dbi services.

CDB resource plan: shares and utilization_limit

Yann Neuhaus - Mon, 2016-10-17 16:00

I’m preparing some slides about PDB security (lockdown) and isolation (resources) for DOAG, and as usual I have more info to share than can fit in 45 minutes. In order to avoid the frustration of removing slides, I usually share them in blog posts. Here are the basic concepts of CDB resource plans in multitenant: shares and resource limits.

The CDB resource plan is mainly about CPU. It also governs the degree of parallelism for parallel query, and the I/O when on Exadata, but the main resource is the CPU: sessions that are not allowed to use more CPU will wait on ‘resmgr: cpu quantum’. In a cloud environment where you provision a PDB, like in the new Exadata Express Cloud Service, you need to ensure that one PDB does not take all CDB resources, but you also have to ensure that resources are fairly shared.

resource_limit

Let’s start with the resource limit. This does not depend on the number of PDBs: it is defined as a percentage of the CDB resources. Here I have a CDB with two PDBs and I’ll run a workload on one PDB only: 8 sessions, all CPU-bound, on PDB1.

I’ve defined a CDB resource plan that sets the resource_limit to 50% for PDB1:

CURRENT_TIMESTAMP PLAN PLUGGABLE_DATABASE DIRECTIVE_TYPE SHARES UTIL
------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN ORA$AUTOTASK AUTOTASK 90
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE DEFAULT_DIRECTIVE 1 100
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN PDB1 PDB 1 50
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100
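The listing above looks like a query on DBA_CDB_RSRC_PLAN_DIRECTIVES. As a sketch (not necessarily the exact statements used for this test), a plan matching these directives can be created from CDB$ROOT with DBMS_RESOURCE_MANAGER, assuming a local sysdba connection:

sqlplus / as sysdba <<'EOF'
exec dbms_resource_manager.create_pending_area
exec dbms_resource_manager.create_cdb_plan(plan => 'MY_CDB_PLAN', comment => 'demo CDB plan')
exec dbms_resource_manager.create_cdb_plan_directive(plan => 'MY_CDB_PLAN', pluggable_database => 'PDB1', shares => 1, utilization_limit => 50)
exec dbms_resource_manager.create_cdb_plan_directive(plan => 'MY_CDB_PLAN', pluggable_database => 'PDB2', shares => 1, utilization_limit => 100)
exec dbms_resource_manager.validate_pending_area
exec dbms_resource_manager.submit_pending_area
alter system set resource_manager_plan='MY_CDB_PLAN';
EOF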

This is an upper limit. I have 8 CPUs, so PDB1 will be allowed to run only 4 sessions on CPU at a time. Here is the result:

[Image: CDB_RESOURCE_PLAN_1_PDB_1_SHARE_50_LIMIT]

What you see here is that when more than the allowed percentage has been used, the sessions are scheduled off the CPU and wait on ‘resmgr: cpu quantum’. And the interesting thing is that they seem to be stopped all at the same time:

[Image: CDB_RESOURCE_PLAN_1_PDB_1_SHARE_50_LIMIT-2]

This makes sense, because the suspended sessions may hold resources that are used by others. However, this pattern does not reproduce for every workload. More work, and probably future blog posts, will be needed on that.

Well, the goal here is to explain that resource_limit is there to define a maximum resource usage. Even if there is no other activity, you will not be able to use all CDB resources if you have a resource limit lower than 100%.

Shares

Shares are there for the opposite reason: to guarantee a minimum of resources to a PDB.
However, the unit is not the same. It cannot be the same: you cannot guarantee a percentage of CDB resources to one PDB, because you don’t know how many other PDBs you have. Let’s say you have 4 PDBs and you want them to be equal: you want to define a minimum of 25% for each. But then, what happens when a new PDB is created? You would need to change all the 25% values to 20%. To avoid that, the minimum resource allocation is expressed in shares. You give shares to each PDB, and each one gets a percentage of resources calculated as its shares divided by the total number of shares.

The result is that when there are not enough resources in the CDB to run all the sessions, the PDBs that use more than their share will wait. Here is an example where PDB1 has 2 shares and PDB2 has 1 share, which means that PDB1 will get at least 66% of the resources and PDB2 at least 33%:

CURRENT_TIMESTAMP PLAN PLUGGABLE_DATABASE DIRECTIVE_TYPE SHARES UTIL
------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN ORA$AUTOTASK AUTOTASK 90
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE DEFAULT_DIRECTIVE 1 100
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN PDB1 PDB 2 100
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100
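Compared to the previous plan, only PDB1’s directive changed (2 shares, and no 50% cap anymore). A sketch of how such a change could be made, again assuming a sysdba connection in CDB$ROOT:

sqlplus / as sysdba <<'EOF'
exec dbms_resource_manager.create_pending_area
exec dbms_resource_manager.update_cdb_plan_directive(plan => 'MY_CDB_PLAN', pluggable_database => 'PDB1', new_shares => 2, new_utilization_limit => 100)
exec dbms_resource_manager.validate_pending_area
exec dbms_resource_manager.submit_pending_area
EOF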

Here is the ASH for each PDB when I run 8 CPU-bound sessions on each. The system is saturated because I have only 8 CPUs.

[Image: CDB_RESOURCE_PLAN_2_PDB_2_SHARE_100_LIMIT-PDB1]

[Image: CDB_RESOURCE_PLAN_2_PDB_1_SHARE_100_LIMIT-PDB2]

Because of the shares difference (2 shares for PDB1 and 1 share for PDB2), PDB1 was able to use more CPU than PDB2 when the system was saturated:
PDB1 was 72% on CPU and 22% waiting; PDB2 was 50% on CPU and 50% waiting.

[Image: CDB_RESOURCE_PLAN_2_PDB_1_SHARE_100_LIMIT-SUM]

In order to illustrate what changes when the system is saturated, I’ve run 16 sessions on PDB1 and then, after 60 seconds, 4 sessions on PDB2.

Here is the activity of PDB1:

[Image: CDB_RESOURCE_PLAN_SHARE_PDB1]

and PDB2:

[Image: CDB_RESOURCE_PLAN_SHARE_PDB2]

At 22:14, PDB1 was able to use all available CPU because there is no utilization_limit and no other PDB has activity. The system is saturated, but by PDB1 only.
At 22:15, PDB2 also has activity, so the resource manager must limit PDB1 in order to give resources to PDB2 in proportion to its share. PDB1, with 2 shares, is guaranteed to be able to use 2/3 of the CPU; PDB2, with 1 share, is guaranteed 1/3 of it.
At 22:16, PDB1’s activity has completed, so PDB2 can use more resources. The 4 sessions need less than the available CPU, so the system is not saturated and there is no wait.

What to remember?

Shares are there to guarantee a minimum of resource utilization when the system is saturated.
Resource_limit is there to set a maximum on resource utilization, whether the system is saturated or not.

 

The post CDB resource plan: shares and utilization_limit appeared first on Blog dbi services.

truncate vs delete with constraints on tables

Tom Kyte - Mon, 2016-10-17 14:46
Tom, I haven't used constraints as much as on this new project that I am on. Usually I truncate tables to clear them out, as it frees up the space etc. Yet when I tried to truncate the tables in RF order, I received constraint errors and it took a ...
Categories: DBA Blogs

Question / Answer table structure design

Tom Kyte - Mon, 2016-10-17 14:46
Hi, We are building a facility that allows our customers to configure questions which their customers in turn can then answer. These questions can be configured in various ways, which determines what is / is not a valid answer and importantly (to t...
Categories: DBA Blogs

Find all queries in application that use string literal

Tom Kyte - Mon, 2016-10-17 14:46
Hi , We have an application in Java that uses Oracle as the database. The current performance is not great, and the DBA have identified a few reasons, one of which is SQLs having string literals instead of bind variables. If we want to change t...
Categories: DBA Blogs

how to track modification of records on a table

Tom Kyte - Mon, 2016-10-17 14:46
Hi Tom, My existing functionality has triggers on about 15 tables for insert/update/delete. The modified rows are inserted into a target table. However, I have been asked to use a different, better-performing functionality to track th...
Categories: DBA Blogs

Insert performance issue

Tom Kyte - Mon, 2016-10-17 14:46
Hi Tom, We have an application to load data into the database; it uses insert statements to insert data into tables. Each table contains nearly 220 columns. insert into table1(col1,col2,col3,col6,col7,col8................col220)values(1,2,3,6,7...
Categories: DBA Blogs

Data insertion strategy in normalized tables

Tom Kyte - Mon, 2016-10-17 14:46
Hi Team, We have many tables (master tables) having primary and foreign key relationships. These tables (normalized) contain static data (master data). Inserting data manually in these tables is a tedious task because if we insert data out of ord...
Categories: DBA Blogs

export each XML Message to separate .xml files from Databases

Tom Kyte - Mon, 2016-10-17 14:46
Hi, We store specific information as an XML field in an Oracle database. I have a specific query whose results are already XML, as each field is stored as an XML file. I want to export each field as an XML file. select REQEmployeeXML from User.Employe...
Categories: DBA Blogs

How to achieve the output in an aligned format?

Tom Kyte - Mon, 2016-10-17 14:46
The following .sql file is called by the .sh shell script. connect username/password@sidname SET SERVEROUTPUT ON; SET LINESIZE 4600; #SET TRIMSPOOL ON; SET WRAP OFF; SET HEADING OFF; #SET TRIMOUT ON; SET TIMING ON; SET FEEDBACK ON; SET SPOO...
Categories: DBA Blogs

Partner TekTalk Webinar: Process Lifecycle for Procurement

WebCenter Team - Mon, 2016-10-17 14:31
TekTalk Webinar:  Process Lifecycle for Procurement

Leveraging the Cloud to Accelerate Your Procurement Process, Approvals and Reduce Errors

Wed Oct 26 at 1:00 PM EST

Effective communication and coordination between Procurement, Legal, Finance, and vendors is key to smoothly acquiring products and services. The procurement process lifecycle can be annoyingly complicated and often experiences frustrating delays, especially when you have to manage massive volumes of active procurements and incoming requests with limited resources.

Join the webinar to learn how you can:
  • Streamline your entire procurement process from initiating procurement request, managing RFx to contract negotiation and managing the award process
  • Effectively evaluate vendors by creating requirement documents, and schedule evaluations to score and rate each vendor all within one system
  • Optimize your procurement process by expanding your supplier network to dial down costs
  • Enable dynamic routing of procurement forms and documents across your enterprise for productive collaboration

Register now to learn how you can gain total control of your procurement process. Request, evaluate, negotiate, approve, and renew with ease.


For more information, please contact info@tekstream.com or call 844-TEK-STRM 

Who will leave their Apple Watch to their grandchildren, Romania?

Usable Apps - Mon, 2016-10-17 09:40

Just back from some very hot Oracle Applications User Experience (OAUX) outreach and enablement in that part of the world known as the Silicon Valley of Transylvania: Romania.

I teamed up with our Bucharest-based Cloud User Experience (UX) Program Manager Ana Tomescu (@annatomescu) and other local Oracle teams to coordinate our presence, which was all about maximizing the Oracle Cloud and SaaS UX message and changing any lingering perceptions about Oracle being only the database company.

[Image: Topics at TechHub Bucharest]

Word Cloud UX: Our outreach in Romania covered many areas. Users are always at the center of what we do.

We took to the stage at the Great People Inside Conference: The New World of Work event in Brașov, Romania, to deliver a keynote presentation about the functionality, form, fitness, and fashion trends of the digital user experience needed for today’s digital workforce.

[Image: The Dress Code of the Digital Workforce]

Fashion and Technology: The Dress Code of the Digital Workforce. The title of this blog post is taken from this slide.

The Great People Inside conference is the largest of its type for HR professionals in Romania, and it focuses on the world of work, so it was an ideal platform for people to hear about the Human Capital Management (HCM) benefits of the Oracle Cloud UX strategy and innovation.

The keynote presentation concluded with digital UX adoption observations and takeaways, and then we joined in a panel with Monica Costea, Oracle HCM Senior Solutions Consultant for the region, and other speakers to answer questions from a packed audience.

[Image: Increasing Digital UX Adoption]

Increasing digital UX adoption: from fashion to fitness to functionality . . . 

The panel was asked about whether Toyota's use of Kirobo, the smart car-based robot in Japan, could apply to other markets, about how to achieve a balance between technology and fitness (Pokémon Go is a fine example), about the 360-degree capabilities of HR applications, and more. The English-to-Romanian translator working in real time deserves a prize for my use of Uncanny Valley.

[Image: Monica Costea, Ultan O'Broin, and Ana Tomescu]

Monica Costea, Ultan O'Broin, and Ana Tomescu testing remote selfie capabilities

In return, I asked the audience whether fitness and wellness programs were a feature of enterprise offerings in Romania (an emergent one, it seems), and how many people in the audience played Pokémon GO (it seems people were shy, but I am assured they do!).

Earlier that same week, we were honored to bring the Powerful Tech Team: The Cloud and Wearable Tech Experience event to Bucharest’s awesome TechHub with co-host Vector. A lively evening there featured panel discussions on Cloud UX strategy and innovation trends and on how to increase wearable tech adoption and engagement, followed by a couple of guerrilla pitches to the assembled audience by local startups Endtest and viLive.

[Image: Follow the smart money, the smart people, and what they're saying]

Influencers: Follow the smart money, smart people, and what they're saying.

Interaction with the TechHub audience included discussions on user research, data accuracy, context, Big Data and visualizations, the importance of cutting-edge and fashionable design, how emerging tech such as AI, machine learning, and VR can be integrated with the wearables experience, and more. We got to experience the excitement and energy of the Bucharest startup scene, make some new contacts, and get some new ideas for further "boots on the ground" UX activities.

Definitely, there is an opportunity for UX and design meetups in this amazing city of tech innovators, entrepreneurs, and developers. 

So, we covered two great events in the same week, showcasing the best of Digital Romania and the OAUX communications and outreach charter: UX storytelling and enablement in action for the world of work.

[Image: Souvenirs of Transylvania]

Some downtime in Transylvania. There's always storytelling and a Vlad involved with UX . . . :) 

It was not my first time in Romania, and it won’t be the last! Oracle is making a huge investment in some very smart people in Romania, and we’re eager to be part of their successes.

We’re looking forward to more UX events in Romania and the ECEMEA region. Stay tuned to the Usable Apps website events section.
