Feed aggregator

SQL Server 2016 SP1 and unlocked enterprise features!

Yann Neuhaus - Wed, 2016-11-16 11:49

Starting with the release of SQL Server 2016 SP1, you will notice that Standard Edition benefits from a lot of features previously available only in Enterprise Edition. These features cover different areas such as performance, data warehousing and security. Yes Sir, this is not a joke, and it is definitely a good reason for customers who want to move to SQL Server 2016!

 

[Image: sql2016sp1]

Of course, we may expect some scalability limitations with features like In-Memory tables or Columnstore, but that’s not so bad!
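
A minimal T-SQL sketch for illustration (the table and index names are made up): with SP1, a statement like the following now also succeeds on Standard Edition, where it previously required Enterprise Edition.

CREATE TABLE dbo.SalesFact
(
    OrderId INT NOT NULL,
    Amount  DECIMAL(10,2) NOT NULL
);
-- Columnstore used to be an Enterprise-only feature
CREATE CLUSTERED COLUMNSTORE INDEX ccix_SalesFact ON dbo.SalesFact;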

Microsoft wants a consistent programming surface area across editions, and in terms of licensing this now resembles Azure, where the choice of edition is driven more by the resources you need than by the features you want.

Stay tuned!

 

 

The post SQL Server 2016 SP1 and unlocked enterprise features! appeared first on Blog dbi services.

Run SQL Server everywhere!

Yann Neuhaus - Wed, 2016-11-16 11:16

Running SQL Server everywhere is not a dream anymore! Microsoft has just announced the first CTP of SQL Server on Linux today!

[Image: sqlserverlinux]

But that’s not all! SQL Server vNext also runs on Windows, on Mac (via Docker), on a physical machine, in a virtual machine and in a Docker container as well… so everywhere you want!

Enjoy!

 

 

 

The post Run SQL Server everywhere! appeared first on Blog dbi services.

HelloSign for Oracle Documents Cloud for Employee Onboarding

WebCenter Team - Wed, 2016-11-16 10:59
Authored by: Sarah Gabot, Demand Generation Manager, HelloSign 

Employees often need to review and sign several documents when starting a new job: I-9s, W-4s, employee handbooks, insurance forms, direct deposit forms, etc. 

All of this can be streamlined by moving your onboarding documents to a cloud solution like Oracle Documents Cloud Service, coupled with an eSignature solution like HelloSign. 

Why should I move onboarding to the Cloud? 

Before you write off using eSignatures for onboarding, consider the benefits: 
  • Save money. Paper copies of your onboarding documents cost money in materials, not to mention the administrative and storage costs that come with them. When you use cloud storage and eSignatures, you eliminate those costs.
  • More accurate paperwork. Having to sift through paperwork with a lot of fine print isn’t an easy task. Not to mention, employees also need to make sure they initial or sign off on nearly everything, and it can be easy to miss something. With an eSignature solution, employers can make certain fields on the document required so that it can’t be submitted incomplete. The end result: fewer mistakes!
  • Better candidate experience. New employees will overall have a better experience when you use cloud storage and eSignatures for onboarding. They’ll appreciate that you’re not handing them a stack of paperwork on their first day and that they’ll be able to sail through their onboarding documents online. 
  • Cut the paperwork clutter. This one is an obvious one. When you move to online document storage like Oracle Documents Cloud Service, you don’t have to worry about filing and managing paper copies of employee paperwork. 
  • It’s greener. When you run copies of paperwork for onboarding, you use a lot of materials to put these together: paper, paper clips, staples, folders, etc. Using cloud documents will prevent you from having to use these materials, and deleting documents is done in a few clicks. 
Suggested Onboarding Documents

Here are a few suggested documents you can move over to Oracle Documents Cloud: 
  • Employee handbook
  • W-4
  • I-9
  • Insurance paperwork
  • Direct Deposit
  • NDA
Companies like Instacart, a same-day grocery delivery service, use HelloSign’s eSignatures as part of their contractor onboarding flow. If your company is looking to move its onboarding documents into the cloud, consider coupling Oracle Documents Cloud with HelloSign. For more information, contact your Oracle sales rep. 

Number of rows of a value from an unbounded, ignore nulls NEXT_VALUE or FIRST_VALUE

Tom Kyte - Wed, 2016-11-16 08:46
I understand the sql in the LiveSQL link. What I am wondering is how I find the number of rows the analytic function has to look to get the last/next value if IGNORE NULLS is part of the statement. If I look at the second row (REPORT_MONTH = 01-FE...
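
Purely as an illustration (the data and column names below are invented, not the ones from the LiveSQL link), one way to see both the ignore-nulls result and how many rows back it was found is to carry the ROW_NUMBER of the last non-null row alongside it:

with t as (
  select date '2016-01-01' report_month, 10 val from dual union all
  select date '2016-02-01', null from dual union all
  select date '2016-03-01', null from dual union all
  select date '2016-04-01', 40   from dual
), s as (
  select report_month, val,
         row_number() over (order by report_month) rn,
         last_value(val ignore nulls)
           over (order by report_month
                 rows between unbounded preceding and current row) last_non_null
  from t
)
select report_month, val, last_non_null,
       -- rows the function had to look back (0 = value found on the current row)
       rn - max(case when val is not null then rn end)
              over (order by report_month
                    rows between unbounded preceding and current row) rows_back
from s;
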
Categories: DBA Blogs

Synchronize specific data based on date using DBMS_COMPARISON

Tom Kyte - Wed, 2016-11-16 08:46
Hi Tom, I have already been synchronizing data successfully between a remote table and a local table using DBMS_COMPARISON with scan mode FULL. But the synchronization performance is very slow as the data grows. When I read the documentation, there is n...
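
The question is truncated, so the following is only a rough sketch of one possible direction, not a verified answer: with a comparison created in CUSTOM scan mode (and an index on the date column that drives the range), DBMS_COMPARISON.COMPARE accepts min/max boundary values, so only the rows changed since a given date need to be scanned. All object, schema and link names below are invented.

declare
  l_scan dbms_comparison.comparison_type;
  l_same boolean;
begin
  -- one-time setup: a comparison that allows range-bounded (custom) scans
  dbms_comparison.create_comparison(
    comparison_name    => 'CMP_T_BY_DATE',
    schema_name        => 'APP',
    object_name        => 'T_LOCAL',
    dblink_name        => 'REMOTE_DB',
    remote_schema_name => 'APP',
    remote_object_name => 'T_REMOTE',
    scan_mode          => dbms_comparison.cmp_scan_mode_custom);

  -- compare only yesterday's rows; boundary values are passed as strings
  -- (check the docs for the exact format expected for a DATE index column)
  l_same := dbms_comparison.compare(
    comparison_name => 'CMP_T_BY_DATE',
    scan_info       => l_scan,
    min_value       => to_char(trunc(sysdate) - 1, 'YYYY-MM-DD HH24:MI:SS'),
    max_value       => to_char(trunc(sysdate),     'YYYY-MM-DD HH24:MI:SS'),
    perform_row_dif => true);

  if not l_same then
    dbms_comparison.converge(
      comparison_name  => 'CMP_T_BY_DATE',
      scan_id          => l_scan.scan_id,
      scan_info        => l_scan,
      converge_options => dbms_comparison.cmp_converge_remote_wins);
  end if;
end;
/
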
Categories: DBA Blogs

database block and redo blocks

Tom Kyte - Wed, 2016-11-16 08:46
1. How do the same blocks get copied to, and exist in, both the database buffer cache and the redo log buffer during update transactions? 2. Which one (DBBC or RLB) gets populated first?
Categories: DBA Blogs

Mismatch in Execute and Fetch of execution plan of SQL

Tom Kyte - Wed, 2016-11-16 08:46
Hi, I have below trace information for a SQL in tkprof trace file call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 0...
Categories: DBA Blogs

Alter table add columns provided from difference of columns from another table.

Tom Kyte - Wed, 2016-11-16 08:46
Hi, there are two tables A and B with identical columns at first. B is a backup table of A that is updated from time to time. Table A was then altered to add some columns with varchar2/date data types. I can get the columns, data types and length...
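
The question is truncated, so only a sketch: the data dictionary can be used to generate the missing ALTER TABLE clauses for B from the column difference, for example (A and B as in the question, running as their owner):

select 'alter table b add ' || column_name || ' ' || data_type ||
       case
         when data_type = 'VARCHAR2'
           then '(' || char_length || ' CHAR)'
         when data_type = 'NUMBER' and data_precision is not null
           then '(' || data_precision || nvl2(data_scale, ',' || data_scale, '') || ')'
       end as ddl
from   user_tab_columns
where  table_name = 'A'
and    column_name not in (select column_name
                           from   user_tab_columns
                           where  table_name = 'B');
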
Categories: DBA Blogs

Before Delete trigger and prevent deletion. Track the entry in EVENT_DETAILS table

Tom Kyte - Wed, 2016-11-16 08:46
Good afternoon Tom! My requirement is: when a user tries to delete a record from the T1 table, capture the attempt, create an entry in EVENT_DETAILS, and do not delete the record from T1. When rolling back, EVENT_D...
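
The question is truncated, but the usual pattern for this requirement looks like the sketch below (the T1 and EVENT_DETAILS columns are invented): log the attempt in an autonomous transaction so the audit row survives any rollback, then raise an error so the delete itself never happens.

create or replace trigger t1_prevent_delete
  before delete on t1
  for each row
declare
  pragma autonomous_transaction;
begin
  insert into event_details (event_time, event_type, deleted_id)
  values (systimestamp, 'DELETE ATTEMPT', :old.id);
  commit;  -- commits only the autonomous audit insert
  raise_application_error(-20001, 'Deletes from T1 are not allowed');
end;
/
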
Categories: DBA Blogs

Roll back

Tom Kyte - Wed, 2016-11-16 08:46
Hi, I would like to know about rollback and commit. Do they commit or roll back only the current session's data, or all sessions' data in the database? For example, user A inserted 10 rows and user B inserted 15 rows, and B performed a rollback; then is...
Categories: DBA Blogs

Partitioning for Existing tables in 12c

Tom Kyte - Wed, 2016-11-16 08:46
Dear Experts, thanks a lot for your advice and support. I am in the process of adding DATE INTERVAL partitioning to an existing table which is holding data. 1. The application team wants to partition only future data and has no need to partition t...
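
The question is truncated, so just an illustrative sketch: in 12.2 an existing table can be converted online to interval partitioning in place (in 12.1 you would use DBMS_REDEFINITION or a rebuild instead). The table and column names below are made up.

alter table sales modify
  partition by range (sale_date) interval (numtoyminterval(1, 'MONTH'))
  ( partition p_initial values less than (date '2017-01-01') )
  online;
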
Categories: DBA Blogs

Caching in memory

Tom Kyte - Wed, 2016-11-16 08:46
I would like to know how much data gets stored in the buffer cache. Say I have a huge amount of RAM on the server and I increase the SGA proportionately; will my buffer cache hold an amount of data close to the RAM allocated? What...
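
One illustrative way to look at this on a given system is the buffer cache advisory, which shows the current cache size alongside the estimated physical reads at other sizes (just a sketch, not an answer to the sizing question itself):

select size_for_estimate mb,
       size_factor,
       estd_physical_reads
from   v$db_cache_advice
where  name          = 'DEFAULT'
and    advice_status = 'ON'
order  by size_for_estimate;
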
Categories: DBA Blogs

oracle form builder with pl/sql

Tom Kyte - Wed, 2016-11-16 08:46
Hello Tom, I have a comprehensively large data set in PL/SQL for which I created a view for reporting purposes. I am using Oracle Forms Builder for the first time. I want to show users data from my view based on the date range they select. Can you h...
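
Only a sketch, since the actual view is unknown: in Forms, the data block can be based on the view with the date range coming from two date items the user fills in, along the lines of the following query (view, column and item names are hypothetical).

select *
from   my_report_view
where  report_date between :ctrl.start_date and :ctrl.end_date
order  by report_date;
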
Categories: DBA Blogs

Transferring default domains for SQL Developer Data Modeller

Jeff Moss - Wed, 2016-11-16 08:46

I got a new laptop the other day and installed all the software, including SQL Developer Data Modeller, all fine. I then opened a model which had a bunch of tables with columns based on Domains… the columns no longer had Domains against them but had been replaced with logical data types instead.

After some research, the fix, in this case, involved copying the file “defaultdomains.xml” from the following directory on my old laptop, to the same place on the new laptop:

%SQL Developer Home%\sqldeveloper\extensions\oracle.datamodeler\types

After restarting and reopening the model all was back to normal.

What I probably could have done in the first place was to create my own Domains file for the Design and save it in the Design folder; then, when I transferred the Design by copying across the Design folder, the domains would have come with it, and I could simply have opened the Domains file on the new laptop. I guess it depends on whether I want these domains to be Design specific or part of the defaults for all designs.


12cR2: CREATE_FILE_DEST for PDB isolation

Yann Neuhaus - Wed, 2016-11-16 03:32

Two years ago I filed an OTN idea to ‘Constrain PDB datafiles into specific directory’, and it became an enhancement request for 12c Release 2. When you provision a PDB, the PDB admin can create tablespaces and put datafiles anywhere on your system. Of course, this is not acceptable in a cloud environment. 12.1 has an option for directories (PATH_PREFIX) and 12.2 brings CREATE_FILE_DEST for datafiles.

create_file_dest

Here is the new option when you create a pluggable database:


SQL> create pluggable database PDB1 admin user admin identified by password role=(DBA)
create_file_dest='/u02/app/oracle/oradata/CDB2/PDB1';
 
Pluggable database created.

Let’s see where are my datafiles:


SQL> alter pluggable database PDB1 open;
Pluggable database altered.
SQL> alter session set container=PDB1;
Session altered.
SQL> select name from v$datafile;
 
NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_system_d2od2o7b_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_sysaux_d2od2o7j_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_undotbs1_d2od2o7l_.dbf

My files have been created in the CREATE_FILE_DEST directory specified at PDB creation, with an OMF structure underneath.
Since OMF already adds the CDB name and the PDB GUID to the path, maybe I should specify only a mount point rather than include the CDB name and the PDB name myself.

If, as a local user, I try to create a datafile elsewhere I get an error:

SQL> connect admin/password@//localhost/pdb1.opcoct.oraclecloud.internal
Connected.
SQL> create tablespace APPDATA datafile '/tmp/appdata.dbf' size 5M;
create tablespace APPDATA datafile '/tmp/appdata.dbf' size 5M
*
ERROR at line 1:
ORA-65250: invalid path specified for file - /tmp/appdata.dbf

This is exactly what I wanted.

Because I’m bound to this directory, I don’t need to give an absolute path:

SQL> create tablespace APPDATA datafile 'appdata.dbf' size 5M;
 
Tablespace created.
 
SQL> select name from v$datafile;
 
NAME
-------------------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_system_d2od2o7b_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_sysaux_d2od2o7j_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_undotbs1_d2od2o7l_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/appdata.dbf

So you don’t need to use OMF there. If the PDB administrator wants to name the datafiles, he can, as long as they stay under the create_file_dest directory. You can create a datafile in a sub-directory of create_file_dest, but the sub-directory needs to exist, of course.
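
For example, still connected as the local admin user and assuming the sub-directory has already been created on the OS under the PDB's create_file_dest:

SQL> create tablespace APPDATA2 datafile '/u02/app/oracle/oradata/CDB2/PDB1/app/appdata2.dbf' size 5M;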

db_create_file_dest

Here it just looks like OMF, so I check the db_create_file_dest parameter:


SQL> show parameter file_dest
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------------------------
db_create_file_dest                  string      /u02/app/oracle/oradata/CDB2/PDB1

and I try to change it (as local user):


SQL> connect admin/password@//localhost/pdb1.opcoct.oraclecloud.internal;
Connected.
SQL> alter system set db_create_file_dest='/tmp';
alter system set db_create_file_dest='/tmp'
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-01031: insufficient privileges
 
SQL> alter session set db_create_file_dest='/tmp';
ERROR:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges

No need to use a lockdown profile here; it is verified at runtime that a local user cannot change it.

If you are connected with a common user, here connected as sysdba, this is the way to change what has been specified at PDB creation time:


SQL> show con_id
 
CON_ID
------------------------------
3
 
SQL> alter system set db_create_file_dest='/tmp';
System altered.
 
SQL> create tablespace APP1;
Tablespace created.
 
SQL> select name from v$datafile;
 
NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_system_d2od2o7b_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_sysaux_d2od2o7j_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_undotbs1_d2od2o7l_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/appdata.dbf
/tmp/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_app1_d2ohx5sp_.dbf

But…

The behavior when you create the PDB with the CREATE_FILE_DEST clause is different from when you create it without and set db_create_file_dest later. In the second case, the restriction does not apply and a local DBA can create a datafile wherever he wants.

So I wanted to check whether this attribute is shipped when unplugging and plugging PDBs. Looking at the pdb_descr_file XML file, I don’t see anything different except the parameter:

<parameters>
<parameter>processes=300
<parameter>nls_language='AMERICAN'
<parameter>nls_territory='AMERICA'
<parameter>filesystemio_options='setall'
<parameter>db_block_size=8192
<parameter>encrypt_new_tablespaces='CLOUD_ONLY'
<parameter>compatible='12.2.0'
<parameter>db_files=250
<parameter>open_cursors=300
<parameter>sql92_security=TRUE
<parameter>pga_aggregate_target=1775294400
<parameter>sec_protocol_error_trace_action='LOG'
<parameter>enable_pluggable_database=TRUE
<spfile>*.db_create_file_dest='/u02/app/oracle/oradata/CDB2/PDB1'
</parameters>

So I tried to unplug and re-plug my PDB, and the restriction was gone. Be careful.

I’ve not found a documented way to check whether the restriction is enabled (other than trying to create a file outside of db_create_file_dest). Please comment if you know one.
However, it seems that a flag in CONTAINER$ is unset when the restriction is there:

SQL> create pluggable database PDB1 admin user admin identified by password role=(DBA) create_file_dest='/u02/app/oracle/oradata/CDB2/PDB1';
Pluggable database created.
 
SQL> select con_id#,flags,decode(bitand(flags, 2147483648), 2147483648, 'YES', 'NO') from container$;
 
   CON_ID#      FLAGS DEC
---------- ---------- ---
         1          0 NO
         2 3221487616 YES
         3 1610874880 NO

Creating the same PDB but without the create_file_dest clause also shows the flag decoded as ‘NO’:

create pluggable database PDB1 admin user admin identified by password role=(DBA);
Pluggable database created.
 
SQL> select con_id#,flags,decode(bitand(flags, 2147483648), 2147483648, 'YES', 'NO') from container$;
 
   CON_ID#      FLAGS DEC
---------- ---------- ---
         1          0 NO
         2 3221487616 YES
         3 1074003968 NO

I suppose that it is stored elsewhere, because those flags are set only once the PDB is opened.

 

The post 12cR2: CREATE_FILE_DEST for PDB isolation appeared first on Blog dbi services.

Creating Oracle Big Data Lite VM on Proxmox

Jeff Moss - Wed, 2016-11-16 02:42

The Oracle Big Data Lite VM, available on Oracle Technology Network, provides a pre-built environment for learning about a number of key Oracle products, including Oracle Database 12c, Big Data Discovery and Data Integrator, as well as Cloudera Distribution including Apache Hadoop (CDH 5.8.0).

The download ultimately delivers an OVA “appliance” file for use with Oracle VirtualBox, but there isn’t anything to stop you running it as a VM on Proxmox 4, with a bit of effort, as follows.

NOTE – Things to read which can help with this process:

  1. Oracle Big Data Lite Deployment Guide.
  2. How to upload an OVA to proxmox guide by James Coyle: https://www.jamescoyle.net/how-to/1218-upload-ova-to-proxmox-kvm
  3. Converting to RAW and pushing to a raw lvm partition: https://www.nnbfn.net/2011/03/convert-kvm-qcow2-to-lvm-raw-partition/
  • Firstly download the files that make up the OVA from here.
  • Follow the instructions on the download page to convert the multiple files into one single OVA file.
  • For Oracle VirtualBox, simply follow the rest of the instructions in the Deployment Guide.
  • For Proxmox, where I was running LVM storage for the virtual machines, first rename the single OVA file to .ISO, then upload that file (BigDataLite460.iso) to a storage area on your Proxmox host; in my case the storage area was called “data”. You can upload the file through the Proxmox GUI, or manually via the command line. My files were uploaded through the GUI and ended up in “/mnt/pve-data/template/iso”.
  • Now, bring up a shell and navigate to the ISO directory and then unpack the ISO file by running “tar xvf BigDataLite460.iso”. This should create five files which include one OVF file (Open Virtualisation Format) and four VMDK files (Virtual Machine Disk).
root@HP20052433:/mnt/pve-data/template/iso# ls -l
total 204127600
-rw------- 1 root root   8680527872 Oct 25 02:43 BigDataLite460-disk1.vmdk
-rw------- 1 root root   1696855040 Oct 25 02:45 BigDataLite460-disk2.vmdk
-rw------- 1 root root  23999689216 Oct 25 03:11 BigDataLite460-disk3.vmdk
-rw------- 1 root root       220160 Oct 25 03:11 BigDataLite460-disk4.vmdk
-rw-r--r-- 1 root root  34377315328 Nov 14 10:59 BigDataLite460.iso
-rw------- 1 root root        20056 Oct 25 02:31 BigDataLite460.ovf
  • Now create a new VM in Proxmox via the GUI or manually. The VM I created had the required memory and CPUs as per the Deployment Guide, together with four hard disks; mine were all on the SCSI interface and were set to be 10G in size initially (this will change later).
  • The hard disks were using a storage area on Proxmox that was defined as type LVM.
  • Now convert the VMDK files to RAW files which we’ll then push to the LVM Hard Disks as follows:
qemu-img convert -f vmdk BigDataLite460-disk1.vmdk -O raw BigDataLite460-disk1.raw
qemu-img convert -f vmdk BigDataLite460-disk2.vmdk -O raw BigDataLite460-disk2.raw
qemu-img convert -f vmdk BigDataLite460-disk3.vmdk -O raw BigDataLite460-disk3.raw
qemu-img convert -f vmdk BigDataLite460-disk4.vmdk -O raw BigDataLite460-disk4.raw
  • Now list those raw files, so we can see their sizes:
root@HP20052433:/mnt/pve-data/template/iso# ls -l *.raw
-rw-r--r-- 1 root root 104857600000 Nov 16 07:58 BigDataLite460-disk1.raw
-rw-r--r-- 1 root root 214748364800 Nov 16 08:01 BigDataLite460-disk2.raw
-rw-r--r-- 1 root root 128849018880 Nov 16 08:27 BigDataLite460-disk3.raw
-rw-r--r-- 1 root root  32212254720 Nov 16 08:27 BigDataLite460-disk4.raw
  • Now resize the lvm hard disks to the corresponding sizes (the ID of my proxmox VM was 106 and my hard disks were scsi):
qm resize 106 scsi0 104857600000
qm resize 106 scsi1 214748364800
qm resize 106 scsi2 128849018880
qm resize 106 scsi3 32212254720
  • Now copy over the content of the raw files to the corresponding lvm hard disks:
dd if=BigDataLite460-disk1.raw of=/dev/vm_storage_group/vm-106-disk-1
dd if=BigDataLite460-disk2.raw of=/dev/vm_storage_group/vm-106-disk-2
dd if=BigDataLite460-disk3.raw of=/dev/vm_storage_group/vm-106-disk-3
dd if=BigDataLite460-disk4.raw of=/dev/vm_storage_group/vm-106-disk-4
  • Now start the VM and hey presto there it is.
  • You could stop there as it’s a self contained environment, but obviously you can also do a whole bunch of networking stuff to make it visible on your network as well.

E-Business Suite Technology Codelevel Checker Updated for EBS 12.2

Steven Chan - Wed, 2016-11-16 02:05

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify missing application tier or database bugfixes that need to be applied to your E-Business Suite Release 12.2 system. ETCC maps missing bugfixes to the default corresponding patches and displays them in a patch recommendation summary.

What's New

ETCC was recently updated to include bug fixes and patching combinations for the following:

  • October 2016 WebLogic Server (WLS) Patch Set Update (PSU)
  • October 2016 Database Patch Set Update and Bundle Patch
  • July 2016 Database Patch Set Update and Bundle Patch
  • July 2016 Database Cloud Service (DBCS) / Exadata Cloud Service (ExaCS) service

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.


Categories: APPS Blogs

difference between varchar2(10) and varchar2(10 char) in oracle

Tom Kyte - Tue, 2016-11-15 14:26
Hi team, could you please explain the difference between the two data types varchar2(10) and varchar2(10 char) in Oracle? I know that with varchar2(10 char) we can store multibyte characters. So could you please exp...
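
A small illustration, assuming a multibyte database character set such as AL32UTF8 (the table name is made up):

create table t_semantics (
  c_byte varchar2(10 byte),  -- up to 10 bytes: may hold fewer than 10 multibyte characters
  c_char varchar2(10 char)   -- up to 10 characters, regardless of how many bytes each one needs
);
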
Categories: DBA Blogs

Backup and restore

Tom Kyte - Tue, 2016-11-15 14:26
Hi team, I have a logical backup export of an 11g production database, and when the development team requires it, we import it into the reporting database. But now we have migrated the 11g database to 12c with ASM, and the same logical backup export runs every day. So, ...
Categories: DBA Blogs

Building a Simple H1 Component with Settings Panel

WebCenter Team - Tue, 2016-11-15 13:30
Authored by: Igor Polyakov, Senior Principal Product Manager, Oracle

In this 5-step tutorial, you will learn how to create a minimal Oracle Sites Cloud Service (SCS) component that has a simple HTML template and CSS. The component that you create will have a simple Settings panel and an entry in the theme's design.json to allow other SCS users to pick from 3 built-in styles when using the H1 component in the Site Builder.

When you create a new component in SCS, you get a set of seeded files that work out of the box. The seeded code covers most of the functionality of a component within the product, and the "Tutorial: Developing Components with Knockout" section in the SCS documentation explains how all the pieces of a component fit together.

In this tutorial, I will explain how to change the seeded code to create your own component that will require only a small subset of seeded code to achieve the end result.

Step 1: Create New Component 
After this step you will have created your component in Sites Cloud Service, which you can immediately drop onto a page. This is the starting point for creating any new component.

To create a local component: 
1. Navigate to Sites -> Components 
2. Select option Create -> Create Local Component 
3. Enter a name, for example “BasicTextEditor”, and optionally a description 
4. Click "Create" to create new component 

Checkpoint 1 
Now that you have successfully created a component, you should see this component in the Component catalog as well as in the Add > Custom component palette for any site you create. Use the following steps to validate your component creation: 
1. Create a new site using any seeded Template, for example create a site called “ComponentTest” using the “StarterTemplate" template. 
2. Select the Edit option and create an update for the site to open it in the Site Builder 
3. Edit a page within the site that you created
4. Click on the Add ( "+" ) button on the left bar and select "Custom" for the list of custom components
5. Select the "H1_Component" from the custom component Palette and drop it onto the page.

You should now see a default rendering for the local component you created 

6. Select the context menu in the banner for the component you dropped
7. Select "Settings" from the drop-down menu. You can change the settings to see how the seeded component's rendering changes. 

In the following steps 2-5, I will describe how you can modify the seeded files to create a new custom component and adapt it for your own purposes. You can read on here.
