The first technical preview of the next version of Windows Server has been available since last October (here), and a second preview with more new features should be available in May.
The final version, originally expected in 2015, has recently been postponed to 2016. It will be the first time that the client and server releases are decoupled.
This new version of Windows Server will include:
- new and changed functionalities for Hyper-V
- improvements for Remote Desktop Services
- new and updated functionalities for Failover Clustering
- significant new features with PowerShell 5.0
- directory services, Web application proxy and other features
According to Microsoft's Jeffrey Snover, the next version of Windows Server has been deeply refactored to build a truly cloud-optimized server: a server restructured from the ground up for cloud scenarios.
The goal is to let you scope a deployment to only the required components.
On top of this cloud-optimized core, Microsoft will build the full server, compatible with what we have today, with two application profiles:
- the first application profile will target the existing set of server APIs
- the second will target a cloud-optimized subset of those APIs
Microsoft is also working to further clarify the difference between Server and Client, to avoid mixing client APIs and server APIs, for example.
Microsoft will also bring Docker containers to the new Windows Server 2016! A container is a compute environment, also called a compute container.
We will have two flavors of compute containers:
- one for application compatibility (server running in a container)
- a second optimized for the cloud (cloud-optimized server)
The goal of Docker is to package an application into a virtual container. Via the container, the application can run without any problem on Windows or Linux servers. This technology simplifies application deployment and is offered as open source under the Apache license by an American company called Docker.
A container is very lightweight, as it does not contain its own operating system; it relies on the host machine for all of its system calls.
Migrating Docker containers is easy, as they are small.
The biggest cloud providers (Amazon with AWS, Microsoft with Azure, Google with Google Compute Engine) have already integrated this technology. Using Docker containers makes it possible to migrate from one cloud to another easily.
In addition, the Docker container technology coming with Windows Server 2016 will be part of a set of application deployment services called Nano Server.
According to an internal Microsoft presentation published by WZor, Nano Server is presented as “The future of Windows Server”.
Nano Server will follow a zero-footprint model: server roles and optional features will reside outside of it. No binaries or metadata in the image, just standalone packages.
Hyper-V, Clustering, Storage, Core CLR, ASP.NET vNext, PaaS v2 and containers will be part of the new roles and features.
The goal is also to change the way servers are managed, moving toward remote management and process automation via Core PowerShell and WMI. To facilitate remote management, local tools like Task Manager, Registry Editor and Event Viewer will be replaced by web-based tools accessible via a remote connection.
This new solution will also be integrated into Visual Studio.
In conclusion, WZor summarized Nano Server as “a nucleus of next-gen cloud infrastructure and applications”. This shows the direction Microsoft wants to give Windows Server 2016: even better cloud integration, optimization for new distributed applications, and easier management.
- Partner Webcast - Oracle Mobile Application Framework 2.1: Update Overview (Oracle Partner Hub: ISV Migration Center Team)
via Oracle Partner Hub: ISV Migration Center Team https://blogs.oracle.com/imc/
- Getting Started with RCU 12c for SOA 12c (Oracle Partner Hub: ISV Migration Center Team)
via Oracle Partner Hub: ISV Migration Center Team https://blogs.oracle.com/imc/
The first post in this series explained how to get PPAS installed on a Linux system. Now that the database cluster is up and running, we should immediately take care of backup and recovery. For this I'll use another system, on which I'll install and configure bart. So, the system overview for now is:
server     ip address       purpose
ppas       192.168.56.243   ppas database cluster
ppasbart   192.168.56.245   backup and recovery server
As bart requires the Postgres binaries, I'll just repeat the PPAS installation on the bart server. Check the first post on how to do that.
Tip: there is an "--extract-only" switch which extracts only the binaries, without bringing up a database cluster.
After that just install the bart rpm:
yum localinstall edb-bart-1.0.2-1.rhel6.x86_64.rpm
All the files will be installed under:
ls -la /usr/edb-bart-1.0/
total 20
drwxr-xr-x.  4 root root    44 Apr 23 13:41 .
drwxr-xr-x. 14 root root  4096 Apr 23 13:41 ..
drwxr-xr-x.  2 root root    17 Apr 23 13:41 bin
drwxr-xr-x.  2 root root    21 Apr 23 13:41 etc
-rw-r--r--.  1 root root 15225 Jan 27 15:24 license.txt
Having a dedicated user for bart is a good idea:
# groupadd bart
# useradd -g bart bart
# passwd bart
Changing password for user bart.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
As backups need some space a top level directory for all the bart backups needs to be created:
# mkdir /opt/backup
# chown bart:bart /opt/backup
# chmod 700 /opt/backup
# mkdir -p /opt/backup/ppas94/archived_wals
Now everything is in place to start the bart configuration. A minimal configuration file would look like this:
cat /usr/edb-bart-1.0/etc/bart.cfg
[BART]
bart-host = firstname.lastname@example.org
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = enterprisedb
description = "PPAS 94 server"
The [BART] section is the global section, while the following sections are specific to the database clusters to back up and restore. As bart requires passwordless ssh authentication between the bart host and the database host to be backed up, let's set this up. On the bart host (ppasbart):
su - bart
ssh-keygen -t rsa
On the host where the database runs (ppas):
su -
cd /opt/PostgresPlus/9.4AS
mkdir .ssh
chown enterprisedb:enterprisedb .ssh/
chmod 700 .ssh/
su - enterprisedb
ssh-keygen -t rsa
As the public keys are now available we'll need to make them available on each host. On the ppas host:
cat .ssh/id_rsa.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
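The key distribution steps above can also be scripted. The following is a minimal sketch (the helper name add_authorized_key is mine, not part of bart or PPAS): it appends a public key to an authorized_keys file only if it is not already there, and enforces the permissions sshd expects.

```shell
# Append a public key to an authorized_keys file idempotently and make sure
# the directory/file permissions are strict enough for sshd to accept them.
add_authorized_key() {
  key_file="$1"    # public key to add, e.g. id_rsa.pub copied from the bart host
  auth_file="$2"   # target authorized_keys file
  mkdir -p "$(dirname "$auth_file")"
  chmod 700 "$(dirname "$auth_file")"
  touch "$auth_file"
  # only append when the exact key line is not present yet
  grep -qxF "$(cat "$key_file")" "$auth_file" || cat "$key_file" >> "$auth_file"
  chmod 600 "$auth_file"
}
```

Running it twice is safe, which helps when the same provisioning steps are replayed on both hosts.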
Add the public key from the bart host to the authorized_keys file above. Example: get the public key from the bart host:
[bart@ppasbart ~]$ id
uid=1001(bart) gid=1001(bart) groups=1001(bart) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[bart@ppasbart ~]$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local
Copy/paste this key into the authorized_keys file for the enterprisedb user on the database host, so that the file looks similar to this:
cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN l
[bart@ppasbart ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQZWeegLpqVB20c3cIN0Bc7pN6OjFM5pBsunDbO6SQ0+UYxZGScwjnX9FSOlmYzqrlz62jxV2dOJBHgaJj/mbFs5XbmvFw6Z4Zj224aBOXAfej4nHqVnn1Tpuum4HIrbsau3rI+jLCNP+MKnumwM7JiG06dsoG4PeUOghCLyFrItq2/uCIDHWoeQCqqnLD/lLG5y1YXQCSR4VkiQm62tU0aTUBQdZWnvtgskKkHWyVRERfLOmlz2puvmmc5YxmQ5XBVMN5dIcIZntTfx3JC3imjrUl10L3hkiPkV0eAt3KtC1M0n9DDao3SfHFfKfEfp5p69vvpZM2uGFbcpkQrtN l
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN
Make the file the same on the bart host and test if you can connect without passwords:
[bart@ppasbart ~]$ hostname
ppasbart.local
[bart@ppasbart ~]$ ssh bart@ppasbart
Last login: Thu Apr 23 14:24:39 2015 from ppas
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
[bart@ppasbart ~]$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:24:47 2015 from ppas
-bash-4.2$ logout
Connection to ppas closed.
Do the same test on the ppas host:
-bash-4.2$ hostname
ppas.local
-bash-4.2$ ssh bart@ppasbart
Last login: Thu Apr 23 14:22:07 2015 from ppasbart
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
-bash-4.2$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:22:18 2015 from ppasbart
-bash-4.2$ logout
Connection to ppas closed.
-bash-4.2$
Once this works, we need to set up a replication user in the database being backed up. So create the user in the database which runs on the ppas host (I'll do that with the enterprisedb user instead of the postgres user, as we'll need to adjust the pg_hba.conf file right after creating the user):
[root@ppas 9.4AS]# su - enterprisedb
Last login: Thu Apr 23 14:25:50 CEST 2015 from ppasbart on pts/1
-bash-4.2$ . pgplus_env.sh
-bash-4.2$ psql -U enterprisedb
psql.bin (184.108.40.206)
Type "help" for help.

edb=# CREATE ROLE bart WITH LOGIN REPLICATION PASSWORD 'bart';
CREATE ROLE
edb=# exit
-bash-4.2$ echo "host all bart 192.168.56.245/32 md5" >> data/pg_hba.conf
Make sure that the IP matches your bart host. Then adjust the bart.cfg file on the bart host to match your configuration:
cat /usr/edb-bart-1.0/etc/bart.cfg
[BART]
bart-host = email@example.com
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = bart
remote-host = firstname.lastname@example.org
description = "PPAS 94 remote server"
Another requirement is that the bart database user must be able to connect to the database without being prompted for a password. Thus we create a .pgpass file on the bart host, which is used for reading the password:
[bart@ppasbart ~]$ cat .pgpass
192.168.56.243:5444:*:bart:bart
[bart@ppasbart ~]$ chmod 600 .pgpass
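The same file can be created non-interactively, for example from a provisioning script. This is only a sketch; it honors libpq's PGPASSFILE override when set, and the host/port/user/password values are the ones used throughout this post.

```shell
# Create the password file with restrictive permissions; libpq silently
# ignores a .pgpass file that is group- or world-readable.
pgpass="${PGPASSFILE:-$HOME/.pgpass}"
umask 077
cat > "$pgpass" <<'EOF'
192.168.56.243:5444:*:bart:bart
EOF
chmod 600 "$pgpass"
```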
As a last step we need to enable wal archiving on the database that should be backed up. The following parameters need to be set in the postgresql.conf file:
wal_level = archive        # or higher
archive_mode = on
archive_command = 'scp %p email@example.com:/opt/backup/ppas94/archived_wals/%f'
max_wal_senders = 1        # or higher
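As a side note on archive_command: PostgreSQL expects it to return non-zero on failure, and it should never overwrite a segment that was already archived. Below is a hedged sketch of such a wrapper; archive_wal is a hypothetical helper, shown copying to a local directory for clarity, whereas the configuration above pushes the segments with scp.

```shell
# Could be wired up as: archive_command = 'archive_wal %p /opt/backup/ppas94/archived_wals'
archive_wal() {
  src="$1"                              # %p: path of the WAL segment to archive
  dest_dir="$2"                         # target directory for archived WALs
  dest="$dest_dir/$(basename "$src")"   # %f: bare file name of the segment
  # refuse to overwrite an already archived segment; non-zero return
  # tells PostgreSQL the archive attempt failed and must be retried
  [ -e "$dest" ] && return 1
  cp "$src" "$dest"
}
```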
Once done restart the database cluster:
su -
service ppas-9.4 restart
Let's see if bart can see anything on the bart server:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-SERVERS -s PPAS94
Server name         : ppas94
Host name           : 192.168.56.243
User name           : bart
Port                : 5444
Remote host         : firstname.lastname@example.org
Archive path        : /opt/backup/ppas94/archived_wals
WARNING: xlog-method is empty, defaulting to global policy
Xlog Method         : fetch
Tablespace path(s)  :
Description         : "PPAS 94 remote server"
Looks fine. So let's do a backup:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s PPAS94
INFO: creating backup for server 'ppas94'
INFO: backup identifier: '1429795268774'
WARNING: xlog-method is empty, defaulting to global policy
56357/56357 kB (100%), 1/1 tablespace
INFO: backup checksum: 6e614f981902c99326a7625a9c262d98
INFO: backup completed successfully
Cool. Let's see what is in the backup catalog:
[root@ppasbart tmp]# ls -la /opt/backup/
total 0
drwx------. 3 bart bart 19 Apr 23 15:02 .
drwxr-xr-x. 4 root root 38 Apr 23 13:49 ..
drwx------. 4 bart bart 46 Apr 23 15:21 ppas94
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/
total 4
drwx------. 4 bart bart   46 Apr 23 15:21 .
drwx------. 3 bart bart   19 Apr 23 15:02 ..
drwx------. 2 bart bart   36 Apr 23 15:21 1429795268774
drwx------. 2 bart bart 4096 Apr 23 15:21 archived_wals
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/1429795268774/
total 56364
drwx------. 2 bart bart       36 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-rw-r--. 1 bart bart       33 Apr 23 15:21 base.md5
-rw-rw-r--. 1 bart bart 57710592 Apr 23 15:21 base.tar
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/archived_wals/
total 81928
drwx------. 2 bart bart     4096 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-------. 1 bart bart 16777216 Apr 23 15:10 000000010000000000000002
-rw-------. 1 bart bart 16777216 Apr 23 15:13 000000010000000000000003
-rw-------. 1 bart bart 16777216 Apr 23 15:20 000000010000000000000004
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000005
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000006
-rw-------. 1 bart bart      304 Apr 23 15:21 000000010000000000000006.00000028.backup
Use the SHOW-BACKUPS switch to get an overview of the backups available:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-BACKUPS
 Server Name   Backup ID       Backup Time           Backup Size
 ppas94        1429795268774   2015-04-23 15:21:23   55.0371 MB
 ppas94        1429795515326   2015-04-23 15:25:18   5.72567 MB
 ppas94        1429795614916   2015-04-23 15:26:58   5.72567 MB
A backup without a restore proves nothing, so let's try to restore one of the backups to a different directory on the ppas server:
[root@ppas 9.4AS]# mkdir /opt/PostgresPlus/9.4AS/data2
[root@ppas 9.4AS]# chown enterprisedb:enterprisedb /opt/PostgresPlus/9.4AS/data2
On the ppasbart host do the restore:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2
INFO: restoring backup '1429795614916' of server 'ppas94'
INFO: restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO: base backup restored
INFO: archiving is disabled
INFO: backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
Looks good. Let's see what is in the data2 directory on the ppas host:
[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2
backup_label  dbms_pipe  pg_clog      pg_hba.conf    pg_log      pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.auto.conf
base          global     pg_dynshmem  pg_ident.conf  pg_logical  pg_notify     pg_serial    pg_stat       pg_subtrans  pg_twophase  pg_xlog     postgresql.conf
[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2/pg_xlog
000000010000000000000008  archive_status
Looks good, too. As this is all on the same server we need to change the port before bringing up the database:
-bash-4.2$ grep port postgresql.conf | head -1
port = 5445        # (change requires restart)
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:01:30 CEST FATAL: data directory "/opt/PostgresPlus/9.4AS/data2" has group or world access
2015-04-23 16:01:30 CEST DETAIL: Permissions should be u=rwx (0700).
Ok, fine. Change it:
-bash-4.2$ chmod 700 /opt/PostgresPlus/9.4AS/data2
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:02:00 CEST LOG: redirecting log output to logging collector process
2015-04-23 16:02:00 CEST HINT: Future log output will appear in directory "pg_log".
Seems ok, let's connect:
-bash-4.2$ psql -p 5445 -U bart
Password for user bart:
psql.bin (220.127.116.11)
Type "help" for help.

edb=> \l
                                        List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    | ICU | Access privileges
-----------+--------------+----------+-------------+-------------+-----+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     |
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     |
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
(4 rows)
Cool, it works. But archiving is disabled, and you'll need to enable it again. This is the default behavior of bart, as it adds "archive_mode=off" to the end of postgresql.conf. Take care to also adjust the archive_command parameter, as otherwise all archived WALs will be scp'ed to the same directory on the ppasbart server as those of the original database. Can we do a point-in-time recovery? Let's try (I'll destroy the restored database cluster and will use the same data2 directory):
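To re-enable archiving on a restored cluster, that appended override has to be removed again before the restart. A small sketch (the exact line bart appends may vary between versions, so the pattern below is an assumption; the helper name is mine):

```shell
# Drop any trailing "archive_mode = off" override from the restored
# postgresql.conf so the cluster archives WALs again after the next restart.
reenable_archiving() {
  conf="$1"
  sed -i '/^[[:space:]]*archive_mode[[:space:]]*=[[:space:]]*off/d' "$conf"
}
```

Remember to also point archive_command at a different directory before restarting, as noted above.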
-bash-4.2$ pg_ctl -D data2 stop -m fast
waiting for server to shut down.... done
server stopped
-bash-4.2$ rm -rf data2/*
-bash-4.2$
Let's try the restore to a specific point in time:
[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2 -g '2015-04-03 15:23:00'
INFO: restoring backup '1429795614916' of server 'ppas94'
INFO: restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO: base backup restored
INFO: creating recovery.conf file
INFO: archiving is disabled
INFO: backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
Seems ok, but what is the difference? When specifying a point in time a recovery.conf file will be created for the restored database cluster:
-bash-4.2$ cat data2/recovery.conf
restore_command = 'scp -o BatchMode=yes -o PasswordAuthentication=no email@example.com:/opt/backup/ppas94/archived_wals/%f %p'
recovery_target_time = '2015-04-03 15:23:00'
Let's start the database (after changing the port again in postgresql.conf):
-bash-4.2$ pg_ctl -D data2 start
server starting
-bash-4.2$ 2015-04-23 16:16:12 CEST LOG: redirecting log output to logging collector process
2015-04-23 16:16:12 CEST HINT: Future log output will appear in directory "pg_log".
Are we able to connect?
-bash-4.2$ psql -U bart -p 5445
Password for user bart:
psql.bin (18.104.22.168)
Type "help" for help.

edb=>
Works, too. So now we have a central backup server for our PostgreSQL infrastructure, from which backups and restores can be executed. Combine this with backup software (like NetBackup, etc.) which picks up the backups from the bart server, and you should be fine. In the next post we'll set up a hot standby database server.
Enhancements in AD and TXK Delta 6
New and Changed Features
Oracle E-Business Suite Technology Stack and Oracle E-Business Suite Applications DBA contain the following new or changed features in R12.AD.C.Delta.6 and R12.TXK.C.Delta.6.
Support for Single File System Development Environments
A normal Release 12.2 online patching environment requires two application tier file systems, one for the run edition and another for the patch edition. This dual file system architecture is fundamental to patching of Oracle E-Business Suite Release 12.2, and is necessary both for production environments and test environments that are intended to be representative of production. This feature makes it possible to create a development environment with a single file system, where custom code can be built and tested. The code should then always be tested in a standard dual file system test environment before being applied to production.
You can set up a single file system development environment by installing Oracle E-Business Suite Release 12.2 in the normal way, and then deleting the $PATCH_BASE directory with the command:
$ rm -rf $PATCH_BASE
A limited set of adop phases and modes are available to support patching of a single file system development environment. These are:
- apply phase in downtime mode
- cleanup phase
Specification of any other phase or mode will cause adop to exit with an error.
The following restrictions apply to using a single file system environment:
- You can only use a single file system environment for development purposes.
- You cannot use online patching on a single file system environment.
- You can only convert an existing dual file system environment to a single file system: you cannot directly create a single file system environment via Rapid Install or cloning.
- There is no way to convert a single file system environment back into a dual file system.
- You cannot clone from a single file system environment.
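Given that the $PATCH_BASE deletion is irreversible, a defensive wrapper around the documented rm -rf step may be worth having. This is purely an illustrative sketch, not an Oracle-provided utility; the RUN_BASE and PATCH_BASE variable names follow the Release 12.2 dual file system convention.

```shell
# Refuse to delete anything unless PATCH_BASE is set and clearly distinct
# from the run edition's file system.
delete_patch_fs() {
  if [ -z "$PATCH_BASE" ]; then
    echo "PATCH_BASE is not set; refusing to delete" >&2
    return 1
  fi
  if [ "$PATCH_BASE" = "$RUN_BASE" ]; then
    echo "PATCH_BASE equals RUN_BASE; refusing to delete" >&2
    return 1
  fi
  rm -rf "$PATCH_BASE"
}
```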
Oracle is progressively moving their products to an exciting new user experience called Oracle Alta. This new interface style optimizes the user interface for both desktop and mobile platforms, with a unified user experience. The features of the new interface are too numerous to mention, but here is a summary of the Oracle Alta implementation in Oracle Utilities Application Framework V22.214.171.124.1 and above:
- The user interface is clearer and with a more modern look and feel. An example is shown below:
- The implementation of Oracle Alta for Oracle Utilities uses the Oracle Jet version of the Alta interface which was integrated into the Oracle Utilities Application Framework rendering engine.
- For easier adoption, the existing product screens have been converted to Alta with as few changes as possible. This minimizes training needs and helps existing customers adopt the new user interface more quickly. Over subsequent releases, new user experiences will be added to existing screens, or new screens introduced, to take full advantage of the user experience.
- There are a few structural changes on the screens to improve the user experience as part of the Alta adoption:
- The fly-out menu on the left in past releases has been replaced with a new menu toolbar. The buttons that appear on the toolbar depend on the services the user is connected to in their security definition. An example of the toolbar is shown below:
- User preferences and common user functions are now on a menu attached to the user. For example:
- Portals and Zones now have page actions attached in the top right of their user interfaces (the example at the top of this article illustrates this behavior). The buttons displayed are dynamic and will vary from zone to zone, portal to portal and user to user, depending on the available functions and the user's security authorizations.
- In query portals, searches can now be saved as named views. In past releases, it was only possible to change the default view to an alternative. It is now possible to alter the criteria, column sequencing, column sorting and column view for a query view, and save that as a named search to jump to. It is possible to have multiple different views of the same query zone available from a context menu. The end user can build new views, alter existing views or remove views as necessary. All of this functionality is security controlled, allowing sites to define what individual users can and cannot do. Also, views can be inherited from template users, in line with bookmarks, favorites, etc. An example of the saved view context menu is shown below:
- Menus have changed. In the past, a menu item supported the Search action (the default action) or "+" to add a new record. In Alta, these are now separate submenus. For example:
- Page titles have been moved to the top of zones to improve usability. The example at the top of this article illustrates this point: the User page title used to be above the zone in the middle; now it is at the top left of the portal or zone.
- Bookmarking has been introduced. This is akin to browser bookmarking where the page and the context for that page are stored with the bookmark for quick traversal. The Bookmark button will appear on pages that can be bookmarked.
- The new user interface allows Oracle Utilities products to support a wide range of browsers and client platforms including mobile platforms. Refer to the Installation Guides for each product to find the browsers and client platforms supported at the time of release.
This article is just a summary of the user interface changes. There will be other articles in the future covering user interface aspects, and other enhancements, in more detail.
The Technical Best Practices and Batch Best Practices whitepapers have been updated with new and changed advice for Oracle Utilities Application Framework V126.96.36.199.1. Advice for previous versions of the Oracle Utilities Application Framework has been included as well.
The whitepapers are available from My Oracle Support at the following document ids:
I came across an Evodesk Standing Desk review.
I could not resist the temptation to reach out to @TreadmillDesker:
You bark loud but how is ThermoDesk ELITE better EVODESK other than motor? $477=1333-886 is a lot for a motor. Let’s see pic.
Here is the response I got back:
Thanks for the question! lab test of the Evo’s base concluded: slower speed, louder motors, and instability at taller heights.
Notice the response was very vague. Slower speed compared to what? Louder motors compared to what? Instability compared to what?
Keep in mind that @TreadmillDesker recommends the ThermoDesk ELITE, with which it has an affiliation, so I wonder if there is a bias here.
In my opinion, if a website is to perform a review and critique other products it should provide sufficient data.
Videos and pictures would be great.
Here is another marketing gimmick from @TreadmillDesker:
Did You Know Office Fitness Can Be Tax Deductible?
Looks like that pinched another nerve, and I responded:
@TreadmillDesker you need to be clear that tax deductible is not the same as tax deduction. please don’t use tax deductible to lure people.
An item being tax deductible means it may be included in expenses for a possible tax deduction, but it does not guarantee a deduction.
First, the individual would need to itemize. Next, only expenses exceeding 2% of AGI qualify; for example, with an AGI of $50,000, only the portion of such expenses above $1,000 counts.
The likelihood of actually getting a tax deduction is less than 10%, and it's not truly a full deduction.
You might ask what qualifies me to make this assessment: I am a retired tax advisor with 19 years of experience.
Don’t get me wrong, I really like the ThermoDesk ELITE, and in all fairness, the review was “perhaps most entertaining”.
If the components of the two companies were compatible, I would buy components from both to build the desk.
Lastly, here is a price comparison for the two desks.
The ThermoDesk ELITE carries a 50+% price premium, but is there a 50% increase in performance, product, or quality?
Desktop Size: 30×72
ThermoDesk ELITE – Electric 3D-Adjustable Desk with 72″ Tabletop
Disclaimer: I do not have any affiliation with either company, nor am I compensated by any means for this post.
Bringing in a couple of keys
But don't touch my bags if you please
Mister Customs Man" --From Arlo Guthrie's "Coming Into Los Angeles"
As I write this, I’m on the road again…Los Angeles. It’s my good fortune to be attending some collaboration sessions on designs for new Oracle Cloud Applications. Can’t talk about the apps being developed…sorry. But the attendees include Oracle Development, the Oracle User Experience team, several Oracle customers, and a few people from my firm. What I can talk about is some observations about the interaction.
The customers in this group are pretty vocal…a great thing when you’re looking for design feedback. They’re not a shy bunch. What’s interesting to me is their focus of interests. Simply put, they’re not interested in the technology of how the applications work. In the words of one customer addressing Oracle: “that’s your problem now.”
These customers are focused first on outcomes - this is what is important to my organization in this particular subject area, so show how you’ll deliver the outcome we need. And, even more interesting, tell us about final states we have yet to consider that may make my organization better. And, in both cases, what are the explicit metrics that show us that we’ve achieved that end state?
Secondly, they care about integration. How will this new offering integrate with what we already have? And who will maintain those integrations going forward?
Third, please show us what information we’ll get that will help us make better decisions? Much of this discussion has revolved around the context of information obtained rather than simply delivering a batch of generic dashboards. This is where the social aspect of enterprise software comes into play, because it provides context.
From these observations, I personally drew four conclusions:
- If this group of customers is fairly representative of all enterprise software customers, it seems that the evolution of enterprise software customers from concerns about technology to concerns about quantifiable outcomes is well underway.
- Integration matters. For the moment, customers seem more interested in best-of-breed solutions than in purchasing entire platforms, so stitching applications together really matters. I suspect that, as SaaS continues to evolve, customers will begin to consider enterprise software on a platform basis rather than going with best-of-breed point solutions, but it does not appear that we're there yet.
- Business intelligence, analytics, big data, whatever…it’s of limited value without context. Customers…at least, these customers, are very interested in learning about their own customer personas and the historical data from those personas in order to predict future behavior.
- User Experience, while not explicitly mentioned during these sessions, has been an implicit requirement. Good UX (attractive, easy-to-use, elegant applications) is no longer optional. All the customers here expect a great UX and, quite frankly, would not even engage in a product design review without seeing a great UX first.
So now you know what I think I'm seeing and hearing. Thoughts? Opinions? Comments?
I need to change a view and an index on an active production system. I’m concerned that the change will fail with an “ORA-00054: resource busy” error because I’m changing things that are in use. I engaged in a Twitter conversation with @FranckPachot and @DBoriented, and they gave me the idea of using DDL_LOCK_TIMEOUT with a short timeout to sneak my changes onto our production system. Really, I’m more worried about backing out the changes, since I plan to make the change at night when things are quiet. If the changes cause a problem, it will be during the middle of the next day. Then I’ll need to sneak in and make the index invisible or drop it, and put the original view text back.
I tested setting DDL_LOCK_TIMEOUT to one second at the session level. This is the most conservative setting:
alter session set DDL_LOCK_TIMEOUT=1;
I created a test table with a bunch of rows in it and ran a long updating transaction against it like this:
update /*+ index(test testi) */ test set blocks=blocks+1;
Then I tried to alter the index invisible with the lock timeout:
alter index testi invisible
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
Same error as before. The update of the entire table took a lot longer than 1 second.
Next I tried the same thing with a shorter running update:
update /*+ index(test testi) */ test set blocks=blocks+1 where owner='SYS' and table_name='DUAL';
commit;
update /*+ index(test testi) */ test set blocks=blocks+1 where owner='SYS' and table_name='DUAL';
commit;
... lots more of these so script will run for a while...
With the default setting of DDL_LOCK_TIMEOUT=0, my alter index invisible statement usually exited with an ORA-00054 error, though eventually I could get it to work. With DDL_LOCK_TIMEOUT=1, in my testing, my alter almost always worked. I guess in some cases my transaction exceeded the 1 second, but usually it did not.
Here is the alter with the timeout:
alter session set DDL_LOCK_TIMEOUT=1;
alter index testi invisible;
Once I made the index invisible the update started taking 4 seconds to run. So, to make the index visible again I had to bump the timeout up to 5 seconds:
alter session set DDL_LOCK_TIMEOUT=5;
alter index testi visible;
So, if I have to back out these changes at a peak time setting DDL_LOCK_TIMEOUT to a small value should enable me to make the needed changes.
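Conceptually, DDL_LOCK_TIMEOUT just turns the DDL's immediate NOWAIT lock request into a bounded wait. Here is a rough Python analogy (not Oracle code, just the waiting semantics) using a lock acquired with a timeout; the durations are arbitrary stand-ins for the transaction and timeout lengths above:

```python
import threading
import time

table_lock = threading.Lock()  # stands in for the TM lock on the table

def long_update(seconds):
    """Simulates a long-running DML transaction holding the table lock."""
    with table_lock:
        time.sleep(seconds)

def alter_index(ddl_lock_timeout):
    """DDL_LOCK_TIMEOUT=0 behaves like NOWAIT; a nonzero value waits up
    to that many seconds for the lock before giving up with ORA-00054."""
    if ddl_lock_timeout == 0:
        got = table_lock.acquire(blocking=False)
    else:
        got = table_lock.acquire(timeout=ddl_lock_timeout)
    if not got:
        return "ORA-00054: resource busy"
    table_lock.release()
    return "index altered"

t = threading.Thread(target=long_update, args=(0.5,))
t.start()
time.sleep(0.1)            # let the "transaction" take the lock first
r_nowait = alter_index(0)  # immediate attempt fails while DML holds the lock
r_wait = alter_index(1)    # a 1-second timeout outlives the 0.5s transaction
t.join()
print(r_nowait)  # ORA-00054: resource busy
print(r_wait)    # index altered
```

This is why a short timeout is enough when transactions are brief but fails against a long-running update, just as in the tests above.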
Here is a zip of my scripts if you want to recreate these tests: zip
You need Oracle 11g or later to use DDL_LOCK_TIMEOUT.
These tests were all run on Oracle 188.8.131.52.
Also, I verified that I studied DDL_LOCK_TIMEOUT for my 11g OCP test. I knew it sounded familiar, but I have not been using this feature. Either I just forgot about it or I did not realize how helpful it could be for production changes.
This is a just a quick blog post to direct readers to the best Oracle-related paper detailing the value EMC XtremIO brings to Oracle Database use cases. I’ve been looking forward to the availability of this paper for quite some time as I supported (minimally, really) the EMC Global Solutions Engineering group in this effort. They really did a great job with this testing! I highly recommend this paper for readers who are interested in:
- Leveraging immediate, space efficient, zero overhead storage snapshots for productivity
- All-Flash Array performance
- Database workload consolidation
Click the following link to access the whitepaper: click here.

Abstract:
This white paper describes the deployment of the XtremIO® all-flash array with Oracle RAC 11g and 12c databases in both physical and virtual environments. It describes optimal performance while scaling up in a physical environment, the effect of adding multiple virtualized database environments, and the impact of using XtremIO Compression with Oracle Advanced Compression. The white paper also demonstrates the physical space efficiency and low performance impact of XtremIO snapshots.
Filed under: oracle Tagged: Oracle Database performance XtremIO flash, Oracle Performance, Random I/O, XtremIO
This whole thing about “not exists” subqueries can run and run. In the previous episode I walked through some ideas of how the following query might perform depending on the data, the indexes, and the transformation that the optimizer might apply:
select count(*)
from t1 w1
where not exists (
        select  1
        from    t1 w2
        where   w2.x = w1.x
        and     w2.y <> w1.y
);
As another participant in the original OTN thread had suggested, however, it might be possible to find a completely different way of writing the query, avoiding the subquery approach completely. In particular there are (probably) several ways that we could write an equivalent query where the table only appears once. In other words, if we restate the requirement we might be able to find a different SQL translation for that requirement.
Looking at the current SQL, it looks like the requirement is: “Count the number of rows in t1 that have values of X that only have one associated value of Y”.
Based on this requirement, the following SQL statements (supplied by two different people) look promising:
WITH counts AS (
        SELECT x, y, COUNT(*) xy_count
        FROM   t1
        GROUP BY x, y
)
SELECT SUM(x_count)
FROM (
        SELECT x, SUM(xy_count) x_count
        FROM   counts
        GROUP BY x
        HAVING COUNT(*) = 1
);

SELECT SUM(COUNT(*))
FROM   t1
GROUP BY x
HAVING COUNT(DISTINCT y) <= 1;
Logically they do seem to address the description of the problem – but there’s a critical difference between these statements and the original. The clue about the difference appears in the absence of any comparisons between columns in the new forms of the query, no t1.colX = t2.colX, no t1.colY != t2.colY, and this might give us an idea about how to test the code. Here’s some test data:
drop table t1 purge;

create table t1 (
        x number(2,0),
        y varchar2(10)
);

create index t1_i1 on t1(x,y);

-- Pick one of the three following pairs of rows
insert into t1(x,y) values(1,'a');
insert into t1(x,y) values(1,null);

-- insert into t1(x,y) values(null,'a');
-- insert into t1(x,y) values(null,'b');

-- insert into t1(x,y) values(null,'a');
-- insert into t1(x,y) values(null,'a');

commit;

-- A pair to be skipped
insert into t1(x,y) values(2,'c');
insert into t1(x,y) values(2,'c');

-- A pair to be reported
insert into t1(x,y) values(3,'d');
insert into t1(x,y) values(3,'e');

commit;

execute dbms_stats.gather_table_stats(user,'t1')
Notice the NULLs – comparisons with NULL lead to rows disappearing, so might the new forms of the query get different results from the old?
The original query returns a count of 4 rows whichever pair we select from the top 6 inserts.
With the NULL in the Y column the new forms report 2 and 4 rows respectively – so only the second query looks viable.
With the NULLs in the X columns and differing Y columns the new forms report 2 and 2 rows respectively – so even the second query is broken.
However, if we add “or X is null” to the second query it reports 4 rows for both tests.
Finally, having added the “or x is null” predicate, we check that it returns the correct 4 rows for the final test pair – and it does.
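The NULL tests above can be reproduced outside Oracle. Here is a sketch using Python's sqlite3 module; SQLite's NULL comparison and GROUP BY semantics match Oracle's for these queries, though the nested aggregate SUM(COUNT(*)) has to be rewritten with an inline view, which SQLite requires:

```python
import sqlite3

def run_counts(rows):
    """Load the test rows, then return the count from each form of the query:
    (original not-exists, first rewrite, second rewrite, fixed second rewrite)."""
    con = sqlite3.connect(":memory:")
    con.execute("create table t1 (x integer, y text)")
    con.executemany("insert into t1(x, y) values (?, ?)", rows)

    q_original = """
        select count(*) from t1 w1
        where not exists (select 1 from t1 w2
                          where w2.x = w1.x and w2.y <> w1.y)"""
    q_first = """
        with counts as (select x, y, count(*) xy_count from t1 group by x, y)
        select sum(x_count)
        from (select x, sum(xy_count) x_count
              from counts group by x having count(*) = 1)"""
    q_second = """
        select sum(c)
        from (select count(*) c from t1
              group by x having count(distinct y) <= 1)"""
    q_fixed = """
        select sum(c)
        from (select count(*) c from t1
              group by x having count(distinct y) <= 1 or x is null)"""
    return tuple(con.execute(q).fetchone()[0]
                 for q in (q_original, q_first, q_second, q_fixed))

common = [(2, "c"), (2, "c"), (3, "d"), (3, "e")]
print(run_counts([(1, "a"), (1, None)] + common))       # (4, 2, 4, 4)
print(run_counts([(None, "a"), (None, "b")] + common))  # (4, 2, 2, 4)
```

The two printed tuples match the narrative: with a NULL in Y the rewrites give 2 and 4, with NULLs in X they give 2 and 2, and the "or x is null" version returns the correct 4 in both cases.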
It looks as if there is at least one solution to the problem that need only access the table once, though it then does two aggregates (hash group by in 11g). Depending on the data it's quite likely that this single scan and double hash aggregation will be more efficient than any of the plans that do a scan and filter subquery or scan and hash anti-join. On the other hand the difference in performance might be small, and the rewritten form is just a little harder to comprehend.

Footnote:
I can’t help thinking that the “real” requirement is probably as given in the textual restatement of the problem, and that the first rewrite of the query is probably the one that’s producing the “right” answers while the original query is probably producing the “wrong” answer.
When you migrate, you should be prepared to face some execution plan changes. That's not new. But here I'll show you a case where you get several bad execution plans because a lot of histograms are missing. The version is the same. The system is the same. You've migrated with Data Pump, importing all statistics. You have the same automatic job to gather statistics with all default options. You have repeated the migration several times on a system where you constantly reproduce the load. You have done a lot of regression tests. Everything was ok.
SOA & BPM Partner Community Webcast May 8th 16:00 CET

Join us for our monthly SOA & BPM Partner Community Webcast. We will give you an update on our SOA Suite 12c and Integration Cloud Service offerings and our community activities.
Schedule: May 8th 2014 16:00-16:45 CET (Berlin time)
Join the Webcast or dial in with Call ID: 4070776 and Call Passcode: 333111
Austria: +43 (0) 192 865 12
Belgium: +32 (0) 240 105 28
Denmark: +45 327 292 22
Finland: +358 (0) 923 193 923
France: +33 (0) 15760 2222
Germany: +49 (0) 692 222 161 06
Ireland: +353 (0) 124 756 50
Italy: +39 (0) 236 008 198
Netherlands: +31 (0) 207 143 543
Spain: +34 914 143 755
Sweden: +46 (0) 856 619 465
Switzerland: +41 (0) 445 804 003
UK: +44 (0) 208 118 1001
United States: +1 408 774 4073
More Local Numbers

Watch and listen:
You can join the Conference by clicking on the link: Join Webcast (audio will play over your computer speakers or headset). Visit our SOA Partner Community Technology Webcast series here.
And now back to the Apple Watch content.
If you’ve read here for a while, you might remember we used to be part of the WebCenter development team, and we worked with Oracle Social Network, affectionately OSN.
Noel (@noelportugal) and Anthony (@anthonyslai), having Apple Watch on the brain, misread and rushed to test OSN and its push notifications on the Watch, and then, they finally *read* the email from Chris and checked the Android Wear notifications too.
Both watch platforms look great, as you can see.
Kudos to Chris and OSN, and consider yourself all the Apple Watch-wiser for today.

Possibly Related Posts:
- Are We Ready for the Apple Watch?
- Fun with an Android Wear Watch
- The Apple Watch Arrives
- Details on the Oracle Social Network + APEX Integration
- Oracle Social Network Technical Tour
From Sales Cloud R9 onwards we now have Activities. Activities can be tasks, appointments, etc.

- Object Name: Activity
- WSDL: https://<hostname>:443/appCmmnCompActivitiesActivityManagement/ActivityService?wsdl
- Version: Tested on R9
- Description: This payload demonstrates how to create an activity of type TASK and assign a primary lead owner
- Operation: createActivity
- Parameters:
Subject

Payload:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:typ="http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/types/"
                  xmlns:act="http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/"
                  xmlns:not="http://xmlns.oracle.com/apps/crmCommon/notes/noteService"
                  xmlns:not1="http://xmlns.oracle.com/apps/crmCommon/notes/flex/noteDff/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:createActivity>
         <typ:activity>
            <!-- Priority: 1 = high, 2 = medium, 3 = low -->
            <act:PriorityCode>1</act:PriorityCode>
            <act:StatusCode>NOT_STARTED</act:StatusCode>
            <act:ActivityContact>
               <!-- Primary contact ID -->
               <act:ContactId>300000093409168</act:ContactId>
            </act:ActivityContact>
            <act:ActivityAssignee>
               <!-- Party ID of assignee -->
               <act:AssigneeId>300000050989179</act:AssigneeId>
            </act:ActivityAssignee>
            <act:ActivityTypeCode>MEETING</act:ActivityTypeCode>
            <act:ActivityFunctionCode>TASK</act:ActivityFunctionCode>
            <act:Subject>Test assign to Matt Hooper for Picard</act:Subject>
         </typ:activity>
      </typ:createActivity>
   </soapenv:Body>
</soapenv:Envelope>
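For scripting, the same envelope can be assembled and sanity-checked before posting it to the service. Here is a minimal Python sketch; the helper name and subject value are illustrative, and the endpoint, IDs, and credentials must come from your own environment:

```python
import xml.etree.ElementTree as ET

# Namespaces taken from the payload above.
NS = {
    "soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
    "typ": "http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/types/",
    "act": "http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/",
}

def build_create_activity(priority, status, contact_id, assignee_id, subject):
    """Assemble the createActivity SOAP envelope as a string."""
    return f"""\
<soapenv:Envelope xmlns:soapenv="{NS['soapenv']}" xmlns:typ="{NS['typ']}" xmlns:act="{NS['act']}">
  <soapenv:Header/>
  <soapenv:Body>
    <typ:createActivity>
      <typ:activity>
        <act:PriorityCode>{priority}</act:PriorityCode>
        <act:StatusCode>{status}</act:StatusCode>
        <act:ActivityContact>
          <act:ContactId>{contact_id}</act:ContactId>
        </act:ActivityContact>
        <act:ActivityAssignee>
          <act:AssigneeId>{assignee_id}</act:AssigneeId>
        </act:ActivityAssignee>
        <act:ActivityTypeCode>MEETING</act:ActivityTypeCode>
        <act:ActivityFunctionCode>TASK</act:ActivityFunctionCode>
        <act:Subject>{subject}</act:Subject>
      </typ:activity>
    </typ:createActivity>
  </soapenv:Body>
</soapenv:Envelope>"""

envelope = build_create_activity(1, "NOT_STARTED",
                                 "300000093409168", "300000050989179",
                                 "Follow up on lead")
# Parse it back to confirm the document is well-formed:
root = ET.fromstring(envelope)
subject = root.find(".//{%s}Subject" % NS["act"]).text
# The envelope would then be POSTed (with basic auth and a SOAPAction
# header) to the ActivityService endpoint given in the WSDL URL above.
```

Round-tripping through a parser before sending is a cheap way to catch malformed payloads early, before the service returns a much less readable SOAP fault.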
Webcast: Introducing Oracle Documents Cloud Service

Driving Improved Productivity and Collaboration for Sales, Marketing, Customer Experience and Operations
A recent survey shows that 89 percent of business managers believe their employees need 24/7 access to core business systems to implement their business strategy. But are current file sync and share solutions up to the mark?
Join Oracle Chief Information Officer and SVP, Mark Sunday, Senior Oracle Product Management and customer executives for a live webcast on Documents Cloud Service – an enterprise-grade, secure and integrated cloud-based content sharing and collaboration offering from Oracle. Find out how your organization can standardize on a corporate/IT-approved cloud solution while meeting the varying business needs of Marketing, Sales, Customer Service, HR, Operations and other departments.
Learn how Oracle Documents Cloud Service is uniquely positioned to:
- Power anytime, anywhere secure access across Web, mobile and desktop
- Mobilize enterprise content without creating information silos
- Drive enterprise-wide collaboration with IT-sanctioned security and controls
10:00 AM PT / 1:00 PM ET

#OracleDOCS

Featured Speakers:
Chief Information Officer and Senior Vice President, Oracle
Executive Vice President, TekStream Solutions
Program Vice President Content and Digital Media Technologies, IDC
When reading through the release notes of the latest Oracle E-Business Suite R12.2 AD.C.Delta.6 patch in note 1983782.1, I wondered what they meant by "Simplification and enhancement of adop console messages". I realized what I had been missing after I applied the AD.C.Delta6 patch: the format of the console messages changed drastically. To be honest, the old console messages printed by the adop command reminded me of a program where somebody forgot to turn off the debug feature. The old adop console messages are simply not easily readable and look more like debug output. AD.C.Delta6 brings a fresh layout to the console messages; they are now more readable and easy to follow. You can see for yourself by looking at the snippets below:
### AD.C.Delta.5 ###

$ adop phase=apply patches=19197270 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Please wait. Validating credentials...

RUN file system context file: /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
PATCH file system context file: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Execute SYSTEM command : df /u01/install/VISION/fs1

************* Start of session *************
version: 12.2.0
started at: Fri Apr 24 2015 13:47:58

APPL_TOP is set to /u01/install/VISION/fs2/EBSapps/appl
[START 2015/04/24 13:48:04] Check if services are down
  [STATEMENT] Application services are down.
[END 2015/04/24 13:48:09] Check if services are down
[EVENT] [START 2015/04/24 13:48:09] Checking the DB parameter value
[EVENT] [END 2015/04/24 13:48:11] Checking the DB parameter value
Using ADOP Session ID from currently incomplete patching cycle
[START 2015/04/24 13:48:23] adzdoptl.pl run
  ADOP Session ID: 12
  Phase: apply
  Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log
  [START 2015/04/24 13:48:30] apply phase
    Calling: adpatch workers=4 options=hotpatch console=no interactive=no defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs_ne/EBSapps/patch/19197270 driver=u19197270.drv logfile=u19197270.log
    ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/19197270/log
    [EVENT] [START 2015/04/24 13:59:45] Running finalize since in hotpatch mode
    [EVENT] [END 2015/04/24 14:00:10] Running finalize since in hotpatch mode
    Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
    ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/log
    [EVENT] [START 2015/04/24 14:01:32] Running cutover since in hotpatch mode
    [EVENT] [END 2015/04/24 14:01:33] Running cutover since in hotpatch mode
  [END 2015/04/24 14:01:36] apply phase
  [START 2015/04/24 14:01:36] Generating Post Apply Reports
    [EVENT] [START 2015/04/24 14:01:38] Generating AD_ZD_LOGS Report
    [EVENT] Report: /u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/sql/ADZDSHOWLOG.sql
    [EVENT] Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/adzdshowlog.out
    [EVENT] [END 2015/04/24 14:01:42] Generating AD_ZD_LOGS Report
  [END 2015/04/24 14:01:42] Generating Post Apply Reports
[END 2015/04/24 14:01:46] adzdoptl.pl run

adop phase=apply - Completed Successfully

Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log

adop exiting with status = 0 (Success)
### AD.C.Delta.6 ###

$ adop phase=apply patches=19330775 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials...
Initializing...
Run Edition context : /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Patch edition context: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Reading driver file (up to 50000000 bytes).
Patch file system freespace: 181.66 GB

Validating system setup...
Node registry is valid.
Application services are down.

[WARNING] ETCC: The following database fixes are not applied in node ebs
  14046443 14255128 16299727 16359751 17250794 17401353 18260550
  18282562 18331812 18331850 18440047 18689530 18730542 18828868
  19393542 19472320 19487147 19791273 19896336 19949371 20294666
Refer to My Oracle Support Knowledge Document 1594274.1 for instructions.

Checking for pending adop sessions...
Continuing with the existing session [Session id: 12]...

===========================================================================
ADOP (C.Delta.6)
Session ID: 12
Node: ebs
Phase: apply
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_140643.log
===========================================================================

Applying patch 19330775 with adpatch utility...
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/19330775/log/u19330775.log

Running finalize actions for the patches applied...
Log: @ADZDSHOWLOG.sql "2015/04/24 14:15:09"

Running cutover actions for the patches applied...
Spawning adpatch parallel workers to process CUTOVER DDLs in parallel
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/log/cutover.log
Performing database cutover in QUICK mode

Generating post apply reports...
Generating log report...
Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/adzdshowlog.out

adop phase=apply - Completed Successfully

adop exiting with status = 0 (Success)
So what are you waiting for, fellow Apps DBAs? Go ahead and apply the new AD delta update to your R12.2 EBS instances. I am really eager to try out the other AD.C.Delta6 new features, especially "Online Patching support for single file system on development or test systems".